Jan 23 13:32:38 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 13:32:38 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 
13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 13:32:38 crc 
restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 13:32:38 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 13:32:39 crc kubenswrapper[4771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 13:32:39 crc kubenswrapper[4771]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 13:32:39 crc kubenswrapper[4771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 13:32:39 crc kubenswrapper[4771]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 13:32:39 crc kubenswrapper[4771]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 13:32:39 crc kubenswrapper[4771]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.081151 4771 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084332 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084357 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084365 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084372 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084379 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084386 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084393 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084400 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084421 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084426 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084431 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084524 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084838 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084848 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084854 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084860 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084866 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084874 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084880 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084885 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 13:32:39 crc 
kubenswrapper[4771]: W0123 13:32:39.084890 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084896 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084901 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084906 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084911 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084922 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084927 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084932 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084937 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084942 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084947 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084952 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084958 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084963 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084970 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084976 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084983 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.084993 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085001 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085007 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085013 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085018 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085023 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085028 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085033 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085038 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085045 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085050 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085055 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085063 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085069 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085074 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085079 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085085 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085092 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085097 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085102 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085107 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085112 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085120 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
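The long run of "unrecognized feature gate" warnings above (a few more follow below) is not an error condition: OpenShift hands the kubelet its whole cluster-level gate set, and gates the kubelet's own table does not register (GatewayAPI, PinnedImages, and so on) are skipped with a warning at feature_gate.go:330, while known GA or deprecated gates (ValidatingAdmissionPolicy, KMSv1, CloudDualStackNodeIPs) are applied with the :351/:353 warnings. A minimal sketch of that observable parse-and-warn behavior — the real implementation lives in k8s.io/component-base/featuregate and its table is far larger — looks like:

// featuregate-sketch.go — mirrors the warn-and-skip logic visible in the log.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Stand-in for the kubelet's registered gate table; maturity drives the
// extra warning for GA (and, analogously, deprecated) gates.
var known = map[string]string{
	"KMSv1":                     "DEPRECATED",
	"CloudDualStackNodeIPs":     "GA",
	"ValidatingAdmissionPolicy": "GA",
}

func apply(spec string, gates map[string]bool) {
	for _, kv := range strings.Split(spec, ",") {
		name, val, _ := strings.Cut(kv, "=")
		maturity, ok := known[name]
		if !ok {
			// Matches the feature_gate.go:330 lines: warn and move on.
			fmt.Printf("W ... unrecognized feature gate: %s\n", name)
			continue
		}
		on, _ := strconv.ParseBool(val)
		if on && maturity == "GA" {
			// Matches the feature_gate.go:353 lines.
			fmt.Printf("W ... Setting GA feature gate %s=true. It will be removed in a future release.\n", name)
		}
		gates[name] = on
	}
}

func main() {
	gates := map[string]bool{}
	apply("GatewayAPI=true,KMSv1=true,CloudDualStackNodeIPs=true", gates)
	fmt.Println("feature gates:", gates)
}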
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085125 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085133 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085138 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085145 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085150 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085154 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085159 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085163 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085168 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085173 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.085178 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086692 4771 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086788 4771 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086813 4771 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086827 4771 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086845 4771 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086858 4771 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086874 4771 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086889 4771 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086900 4771 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086922 4771 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086935 4771 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086948 4771 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086960 4771 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086972 4771 flags.go:64] FLAG: --cgroup-root="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086983 4771 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.086995 4771 flags.go:64] FLAG: --client-ca-file="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087007 4771 
flags.go:64] FLAG: --cloud-config="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087019 4771 flags.go:64] FLAG: --cloud-provider="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087115 4771 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087130 4771 flags.go:64] FLAG: --cluster-domain="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087142 4771 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087155 4771 flags.go:64] FLAG: --config-dir="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087574 4771 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087614 4771 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087665 4771 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087678 4771 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087691 4771 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087704 4771 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087717 4771 flags.go:64] FLAG: --contention-profiling="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087728 4771 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087740 4771 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087752 4771 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087763 4771 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087778 4771 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087790 4771 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087802 4771 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087813 4771 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087825 4771 flags.go:64] FLAG: --enable-server="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087835 4771 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087853 4771 flags.go:64] FLAG: --event-burst="100" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087865 4771 flags.go:64] FLAG: --event-qps="50" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087877 4771 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087888 4771 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087899 4771 flags.go:64] FLAG: --eviction-hard="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087913 4771 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087925 4771 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087938 4771 flags.go:64] 
FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087951 4771 flags.go:64] FLAG: --eviction-soft="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087965 4771 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087977 4771 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.087988 4771 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088001 4771 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088012 4771 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088023 4771 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088035 4771 flags.go:64] FLAG: --feature-gates="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088051 4771 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088063 4771 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088078 4771 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088090 4771 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088102 4771 flags.go:64] FLAG: --healthz-port="10248" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088114 4771 flags.go:64] FLAG: --help="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088126 4771 flags.go:64] FLAG: --hostname-override="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088137 4771 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088149 4771 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088161 4771 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088174 4771 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088185 4771 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088197 4771 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088210 4771 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088221 4771 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088232 4771 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088244 4771 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088259 4771 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088271 4771 flags.go:64] FLAG: --kube-reserved="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088283 4771 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088294 4771 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088307 4771 flags.go:64] FLAG: 
--kubelet-cgroups="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088322 4771 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088335 4771 flags.go:64] FLAG: --lock-file="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088349 4771 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088361 4771 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088376 4771 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088397 4771 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088447 4771 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088461 4771 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088472 4771 flags.go:64] FLAG: --logging-format="text" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088484 4771 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088498 4771 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088513 4771 flags.go:64] FLAG: --manifest-url="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088524 4771 flags.go:64] FLAG: --manifest-url-header="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088543 4771 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088555 4771 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088570 4771 flags.go:64] FLAG: --max-pods="110" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088582 4771 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088594 4771 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088605 4771 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088617 4771 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088629 4771 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088641 4771 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088653 4771 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088691 4771 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088705 4771 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088717 4771 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088796 4771 flags.go:64] FLAG: --pod-cidr="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088811 4771 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 13:32:39 crc 
kubenswrapper[4771]: I0123 13:32:39.088830 4771 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088843 4771 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088856 4771 flags.go:64] FLAG: --pods-per-core="0" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088869 4771 flags.go:64] FLAG: --port="10250" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088882 4771 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088893 4771 flags.go:64] FLAG: --provider-id="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088905 4771 flags.go:64] FLAG: --qos-reserved="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088918 4771 flags.go:64] FLAG: --read-only-port="10255" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088931 4771 flags.go:64] FLAG: --register-node="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088944 4771 flags.go:64] FLAG: --register-schedulable="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088955 4771 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088978 4771 flags.go:64] FLAG: --registry-burst="10" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.088990 4771 flags.go:64] FLAG: --registry-qps="5" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089002 4771 flags.go:64] FLAG: --reserved-cpus="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089014 4771 flags.go:64] FLAG: --reserved-memory="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089029 4771 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089042 4771 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089054 4771 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089066 4771 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089078 4771 flags.go:64] FLAG: --runonce="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089090 4771 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089103 4771 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089116 4771 flags.go:64] FLAG: --seccomp-default="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089128 4771 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089140 4771 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089156 4771 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089169 4771 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089182 4771 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089195 4771 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089208 4771 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089220 4771 flags.go:64] FLAG: 
--storage-driver-user="root" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089234 4771 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089247 4771 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089260 4771 flags.go:64] FLAG: --system-cgroups="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089272 4771 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089292 4771 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089304 4771 flags.go:64] FLAG: --tls-cert-file="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089315 4771 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089438 4771 flags.go:64] FLAG: --tls-min-version="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089457 4771 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089469 4771 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089482 4771 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089494 4771 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089508 4771 flags.go:64] FLAG: --v="2" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089525 4771 flags.go:64] FLAG: --version="false" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089542 4771 flags.go:64] FLAG: --vmodule="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089557 4771 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.089571 4771 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.089958 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.089981 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.089995 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090006 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090019 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090030 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090041 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090053 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090064 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090074 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090084 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 
13:32:39.090095 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090106 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090121 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090134 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090144 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090155 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090166 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090177 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090188 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090199 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090209 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090220 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090234 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090245 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090255 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090266 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090281 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090296 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090311 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
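The flags.go:64 "FLAG: --name=value" dump above is the kubelet echoing the effective value of every registered flag at startup (the dump itself records --v="2", so this verbosity is on), which is handy for spotting drift between the systemd unit and the config file. The shape of that output is easy to reproduce with the standard library; this is a toy stand-in, not the kubelet's own code:

// flagdump-sketch.go — two toy flags standing in for the kubelet's hundreds.
package main

import (
	"flag"
	"fmt"
)

func main() {
	flag.String("node-ip", "192.168.126.11", "node IP address")
	flag.Int("v", 2, "log verbosity")
	flag.Parse()
	// One line per flag, defaults included — the shape of flags.go:64 above.
	flag.VisitAll(func(f *flag.Flag) {
		fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
	})
}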
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090326 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090339 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090351 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090364 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090375 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090386 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090399 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090451 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090465 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090477 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090489 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090501 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090512 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090523 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090534 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090544 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090555 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090566 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090576 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090591 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090607 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090621 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090633 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090646 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090662 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090676 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090689 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090700 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090712 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090725 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090737 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090748 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090759 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090772 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090785 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090799 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090811 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090822 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090833 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090843 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.090855 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.090887 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.099319 4771 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.099370 4771 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099464 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099475 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099480 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099486 4771 feature_gate.go:330] unrecognized feature 
gate: ClusterAPIInstallIBMCloud Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099493 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099498 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099503 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099508 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099513 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099517 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099522 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099527 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099531 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099536 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099540 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099545 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099550 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099554 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099561 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099568 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099573 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099579 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099584 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099589 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099595 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099601 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099607 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099613 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099618 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099624 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099630 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099636 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099641 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099646 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099650 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099655 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099659 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099664 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099669 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099675 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099680 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099686 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099690 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099694 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099699 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099703 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099708 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099712 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099716 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099721 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099726 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099730 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099735 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099739 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099743 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099748 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099752 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099756 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099760 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099765 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099769 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099773 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099777 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099781 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099785 4771 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallGCP Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099789 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099794 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099800 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099804 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099809 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099813 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.099821 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099947 4771 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099953 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099958 4771 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099963 4771 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099967 4771 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099972 4771 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099976 4771 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099980 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099984 4771 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099990 4771 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.099997 4771 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100002 4771 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100008 4771 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100014 4771 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
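The "feature gates: {map[...]}" summary above is byte-for-byte identical on each appearance: the gate set is re-applied on several code paths during startup (hence the repeated warning storms), but the effective map is the same every time, and that map — not the warnings — is what determines behavior. When comparing these summaries across restarts it can help to parse the line back into a map; a small convenience sketch (the `{map[Name:bool ...]}` shape is just fmt's %v rendering of a Go map):

// gatemap-sketch.go — reads a "feature gates: {map[...]}" summary line.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGateMap(line string) map[string]bool {
	start := strings.Index(line, "map[")
	end := strings.LastIndex(line, "]")
	gates := map[string]bool{}
	if start < 0 || end <= start {
		return gates
	}
	for _, kv := range strings.Fields(line[start+4 : end]) {
		name, val, ok := strings.Cut(kv, ":")
		if !ok {
			continue
		}
		b, _ := strconv.ParseBool(val)
		gates[name] = b
	}
	return gates
}

func main() {
	line := "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
	fmt.Println(parseGateMap(line)["KMSv1"]) // true
}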
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100020 4771 feature_gate.go:330] unrecognized feature gate: Example Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100024 4771 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100028 4771 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100034 4771 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100038 4771 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100043 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100047 4771 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100052 4771 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100056 4771 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100061 4771 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100067 4771 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100072 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100077 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100084 4771 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100088 4771 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100093 4771 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100098 4771 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100102 4771 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100107 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100111 4771 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100116 4771 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100120 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100124 4771 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100129 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100133 4771 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 13:32:39 crc kubenswrapper[4771]: 
W0123 13:32:39.100138 4771 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100144 4771 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100148 4771 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100154 4771 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100158 4771 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100163 4771 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100167 4771 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100172 4771 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100176 4771 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100181 4771 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100185 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100190 4771 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100194 4771 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100198 4771 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100203 4771 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100207 4771 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100212 4771 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100217 4771 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100221 4771 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100225 4771 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100231 4771 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100235 4771 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100240 4771 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100245 4771 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100250 4771 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100254 4771 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 13:32:39 crc 
kubenswrapper[4771]: W0123 13:32:39.100258 4771 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100262 4771 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100266 4771 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100271 4771 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100276 4771 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.100280 4771 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.100288 4771 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.100810 4771 server.go:940] "Client rotation is on, will bootstrap in background" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.103837 4771 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.103927 4771 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
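The entries that follow show the client certificate manager coming up: the kubelet loads kubelet-client-current.pem, schedules rotation well before expiry (expiration 2026-02-24, rotation deadline 2025-12-14 in this log), and immediately tries to rotate — and that first CSR POST to api-int.crc.testing:6443 fails with "connection refused" because the API server is not yet listening this early in boot. The manager typically keeps retrying, so the E-line here is expected startup noise rather than a fatal condition. The deadline itself is jittered so a fleet of kubelets does not rotate in lockstep; a sketch of that idea, where the 70–90%-of-lifetime band and the one-year validity are illustrative assumptions rather than values read from this log:

// rotation-deadline-sketch.go — jittered rotation deadline, illustrative only.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Pick a random point late in the certificate's lifetime.
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(frac * float64(total)))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z") // expiry from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                         // assumed one-year validity
	fmt.Println("rotation deadline ~", rotationDeadline(notBefore, notAfter).UTC())
}

After the certificate machinery starts, the log moves on to runtime probing: the CRI v1 runtime and image APIs are validated against the CRI-O socket, the cgroup driver (systemd) is taken from the runtime rather than the deprecated flag, and cAdvisor inventories filesystems and machine topology.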
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.104548 4771 server.go:997] "Starting client certificate rotation"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.104573 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.104862 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-14 10:47:34.667202586 +0000 UTC
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.104958 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.111574 4771 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.112487 4771 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.115668 4771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.123855 4771 log.go:25] "Validated CRI v1 runtime API"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.138553 4771 log.go:25] "Validated CRI v1 image API"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.140503 4771 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.143332 4771 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-13-27-57-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.143366 4771 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
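The certificate_manager.go:356 lines above report an expiration of 2026-02-24 and a rotation deadline of 2025-12-14, i.e. roughly 80% of the way through the certificate's one-year validity. A sketch of how such a jittered deadline can be computed, assuming the upstream policy of picking a random point in the 70-90% band of the validity window; rotationDeadline is a hypothetical helper, not the client-go API:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline picks a random instant in the 70%-90% band of the
    // certificate's validity window, spreading rotation load across nodes
    // (the band is an assumption mirroring the upstream certificate manager).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Validity window assumed from the expiration logged above.
        notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }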
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.165555 4771 manager.go:217] Machine: {Timestamp:2026-01-23 13:32:39.164436376 +0000 UTC m=+0.186974011 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:416566bb-ab9b-4758-90c6-c01061b893a8 BootID:1e760c04-36aa-4fe4-b672-fbc6c675c4ad Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a7:92:b4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a7:92:b4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fa:41:b1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7a:34:ef Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d3:e1:d5 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3b:3c:01 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:15:b3:cb:96:4a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:39:3c:e8:1c:cf Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.165763 4771 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.165979 4771 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.166560 4771 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.166725 4771 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.166757 4771 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.166951 4771 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.166962 4771 container_manager_linux.go:303] "Creating device plugin manager"
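The nodeConfig dump above carries the kubelet's HardEvictionThresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A simplified Go sketch of evaluating one signal against such a threshold, where a threshold is either an absolute quantity or a percentage of capacity; threshold and crossed are illustrative names, not the eviction manager's real types:

    package main

    import "fmt"

    // threshold mirrors one entry of the HardEvictionThresholds list above:
    // a signal compared with LessThan against either an absolute quantity
    // (bytes) or a percentage of capacity.
    type threshold struct {
        signal     string
        quantity   int64   // bytes; zero if percentage-based
        percentage float64 // fraction of capacity; zero if quantity-based
    }

    // crossed reports whether available has fallen below the threshold
    // (a simplified sketch of the eviction manager's comparison).
    func crossed(t threshold, available, capacity int64) bool {
        limit := t.quantity
        if t.percentage > 0 {
            limit = int64(t.percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memAvail := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percentage: 0.10}
        fmt.Println(crossed(memAvail, 64<<20, 32<<30)) // true: under 100Mi
        fmt.Println(crossed(nodefs, 20<<30, 85<<30))   // false: well over 10% free
    }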
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.167152 4771 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.167182 4771 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.167456 4771 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.167534 4771 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.168271 4771 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.168286 4771 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.168309 4771 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.168324 4771 kubelet.go:324] "Adding apiserver pod source"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.168336 4771 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.170749 4771 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.170841 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.170902 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.170934 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.170961 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.171166 4771 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
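kubelet.go:313 and file.go:69 above show the kubelet adding /etc/kubernetes/manifests as a static pod source and watching it. The real watcher is inotify-based; this dependency-free polling sketch only illustrates the same observe-the-directory idea, and every name in it is illustrative:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // pollManifests lists the static pod manifests under dir once per
    // interval and reports additions and removals.
    func pollManifests(dir string, interval time.Duration) {
        seen := map[string]bool{}
        for range time.Tick(interval) {
            entries, err := os.ReadDir(dir)
            if err != nil {
                fmt.Println("read error:", err)
                continue
            }
            current := map[string]bool{}
            for _, e := range entries {
                if filepath.Ext(e.Name()) == ".yaml" {
                    current[e.Name()] = true
                    if !seen[e.Name()] {
                        fmt.Println("new static pod manifest:", e.Name())
                    }
                }
            }
            for name := range seen {
                if !current[name] {
                    fmt.Println("manifest removed:", name)
                }
            }
            seen = current
        }
    }

    func main() {
        pollManifests("/etc/kubernetes/manifests", 2*time.Second)
    }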
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.171747 4771 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172183 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172206 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172214 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172222 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172233 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172240 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172246 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172257 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172266 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172274 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172285 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172292 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172473 4771 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.172804 4771 server.go:1280] "Started kubelet"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.173304 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.173400 4771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.173554 4771 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.174280 4771 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.174757 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.174787 4771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.174904 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:12:39.428548568 +0000 UTC
Jan 23 13:32:39 crc systemd[1]: Started Kubernetes Kubelet.
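ratelimit.go:55 above configures the podresources endpoint with qps=100 and burstTokens=10. The standard Go expression of that policy is a token-bucket limiter; a sketch using golang.org/x/time/rate (an external module, assumed fetched with go get):

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // qps=100 burstTokens=10, matching the podresources limits logged
        // above; rate.NewLimiter takes (events per second, burst size).
        limiter := rate.NewLimiter(rate.Limit(100), 10)

        allowed, dropped := 0, 0
        for i := 0; i < 50; i++ { // a burst of 50 back-to-back requests
            if limiter.Allow() {
                allowed++
            } else {
                dropped++
            }
        }
        // Roughly the 10 burst tokens pass immediately; the rest are dropped
        // until the bucket refills at 100 tokens per second.
        fmt.Printf("allowed=%d dropped=%d\n", allowed, dropped)
    }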
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.175113 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.175125 4771 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.175138 4771 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.175140 4771 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.177565 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.178158 4771 factory.go:55] Registering systemd factory
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.178177 4771 factory.go:221] Registration of the systemd container factory successfully
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.178245 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.178306 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.179026 4771 factory.go:153] Registering CRI-O factory
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.179053 4771 factory.go:221] Registration of the crio container factory successfully
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.179115 4771 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.179135 4771 factory.go:103] Registering Raw factory
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.179154 4771 manager.go:1196] Started watching for new ooms in manager
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.179175 4771 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.178682 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d5f6f0bf05011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 13:32:39.172780049 +0000 UTC m=+0.195317674,LastTimestamp:2026-01-23 13:32:39.172780049 +0000 UTC m=+0.195317674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
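controller.go:145 above fails to ensure the node lease against the unreachable API server and announces a retry at interval="200ms"; on repeated failures the interval grows. A generic Go sketch of that retry-with-exponential-backoff shape (the doubling policy and 7s cap are assumptions for illustration, not values read from the log):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureWithBackoff retries fn, doubling the delay after each failure
    // up to maxInterval, echoing the "will retry ... interval=200ms" lines.
    func ensureWithBackoff(fn func() error, initial, maxInterval time.Duration) {
        interval := initial
        for {
            if err := fn(); err == nil {
                return
            }
            fmt.Println("failed, will retry, interval =", interval)
            time.Sleep(interval)
            if interval *= 2; interval > maxInterval {
                interval = maxInterval
            }
        }
    }

    func main() {
        attempts := 0
        ensureWithBackoff(func() error {
            if attempts++; attempts < 5 {
                return errors.New("connection refused") // simulated API outage
            }
            return nil
        }, 200*time.Millisecond, 7*time.Second)
        fmt.Println("lease ensured after", attempts, "attempts")
    }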
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.187214 4771 manager.go:319] Starting recovery of all containers
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190048 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190089 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190105 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190114 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190123 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190132 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190141 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190149 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190159 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190168 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190177 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
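The long run of reconstruct.go:130 lines that follows is the volume manager rebuilding its actual state from what is already on disk: every directory found under /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name> becomes a volume marked "uncertain" until the API server can be reached. A simplified directory-scan sketch of that reconstruction step (real reconstruction also inspects mount state; scanVolumes is an illustrative name):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // scanVolumes walks /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<name>
    // and yields one record per directory found on disk.
    func scanVolumes(root string) error {
        pods, err := os.ReadDir(root)
        if err != nil {
            return err
        }
        for _, pod := range pods {
            volRoot := filepath.Join(root, pod.Name(), "volumes")
            plugins, err := os.ReadDir(volRoot)
            if err != nil {
                continue // pod directory without a volumes subtree
            }
            for _, plugin := range plugins {
                vols, err := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
                if err != nil {
                    continue
                }
                for _, v := range vols {
                    fmt.Printf("uncertain volume: pod=%s plugin=%s name=%s\n",
                        pod.Name(), plugin.Name(), v.Name())
                }
            }
        }
        return nil
    }

    func main() {
        if err := scanVolumes("/var/lib/kubelet/pods"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }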
"Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190185 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190194 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190204 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190215 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190246 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190254 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190263 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190274 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190282 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190289 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190298 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190310 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190320 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190330 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190343 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190357 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190368 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190379 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190388 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190397 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190439 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190450 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190459 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190470 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190480 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190489 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190499 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190508 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190518 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190529 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190538 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190548 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190560 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190569 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190580 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190592 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190601 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190612 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190623 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190633 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190642 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190658 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190670 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190681 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190691 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190704 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190714 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190723 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190733 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190743 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190751 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190763 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190772 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190782 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190791 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190800 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190812 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190830 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190848 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190861 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190873 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190886 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190897 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190912 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190925 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190937 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190949 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190961 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.190972 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191012 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191025 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191037 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191048 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191061 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191079 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191091 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191104 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191118 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191129 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191141 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191153 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191164 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191175 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191186 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191196 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191205 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191216 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191225 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191234 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191245 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191256 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191265 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191275 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191290 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191301 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191312 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191322 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191334 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191346 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191357 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191367 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191377 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191388 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191397 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191422 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191434 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191448 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191458 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191468 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191479 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191488 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191499 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191510 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191521 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191529 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191539 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191548 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191557 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191567 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191577 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191587 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191598 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191608 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191618 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191628 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191637 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191646 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191655 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191664 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191673 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191684 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191692 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191701 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191713 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191721 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191729 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191738 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191747 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191758 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191766 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191775 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191784 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191792 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191801 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191811 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191820 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191829 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191840 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191871 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191881 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191892 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191903 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191916 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191929 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191943 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191954 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191966 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191977 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191988 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.191998 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192009 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192021 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192032 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192042 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192868 4771 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192896 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192907 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192917 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192926 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192937 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192946 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192955 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192965 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192973 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.192982 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 
13:32:39.192991 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193000 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193010 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193018 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193027 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193035 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193044 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193052 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193062 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193072 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193081 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193089 4771 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193098 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193106 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193115 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193123 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193131 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193142 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193149 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193158 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193169 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193178 4771 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193187 4771 reconstruct.go:97] "Volume reconstruction finished" 
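The long run of reconstruct.go:130 entries ending here is the kubelet rebuilding its actual state of the world after a restart: every volume directory still present under /var/lib/kubelet/pods is re-added as "uncertain" until the reconciler (started on the next entry) can verify the mount or clean it up. A quick way to inventory what was reconstructed is to tally these entries per volume plugin. The following standalone Go helper is a hypothetical diagnostic written for this log format (it is not part of the kubelet); it assumes one journal entry per line, as journalctl prints them, and reads the dump on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Match reconstruct.go:130 entries and capture the volume plugin name
	// from volumeName="kubernetes.io/<plugin>/..." (configmap, secret,
	// projected, empty-dir, csi, ...).
	re := regexp.MustCompile(`reconstruct\.go:130\].*?volumeName="kubernetes\.io/([^/]+)/`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	plugins := make([]string, 0, len(counts))
	for p := range counts {
		plugins = append(plugins, p)
	}
	sort.Strings(plugins)
	for _, p := range plugins {
		fmt.Printf("%-12s %d\n", p, counts[p])
	}
}

Fed output such as journalctl -u kubelet, it prints one line per plugin type with the number of volumes the kubelet marked uncertain during reconstruction.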
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.193195 4771 reconciler.go:26] "Reconciler: start to sync state" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.208206 4771 manager.go:324] Recovery completed Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.219780 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.223496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.223543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.223554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.225054 4771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.226194 4771 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.226218 4771 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.226239 4771 state_mem.go:36] "Initialized new in-memory state store" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.226792 4771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.226844 4771 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.226871 4771 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.226921 4771 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.228870 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.228995 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.233344 4771 policy_none.go:49] "None policy: Start" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.234023 4771 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.234053 4771 state_mem.go:35] "Initializing new in-memory state store" Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.275756 4771 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.282832 4771 manager.go:334] "Starting Device Plugin manager" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.282877 4771 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.282889 4771 server.go:79] "Starting device plugin registration server" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.283556 4771 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.283575 4771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.283742 4771 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.283899 4771 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.283909 4771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.290124 4771 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.327393 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.327569 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.328672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.328722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.328742 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.328919 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.329127 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.329183 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330133 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330145 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330153 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330304 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330508 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.330535 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331109 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331321 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331337 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331344 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331474 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331711 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.331843 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.332191 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.332210 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.332217 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.332283 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.332365 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.332393 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.333257 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.333277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.333289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.333370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.333398 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.333438 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.334379 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.334477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.334497 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.336467 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.336506 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.337445 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.337562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.337651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.378892 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.384036 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.385446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.385494 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.385510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.385612 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.386121 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.395938 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.395975 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396044 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396070 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396215 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396252 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396295 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396326 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396391 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396468 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396501 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396530 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.396588 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.497798 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.497859 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.497883 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.497904 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.497924 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.497945 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.497967 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 
13:32:39.497987 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498017 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498008 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498081 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498073 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498133 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498137 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498139 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498190 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498188 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498211 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498036 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498166 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498304 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498367 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498390 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498430 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498434 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498481 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498521 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498568 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.498647 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.586495 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.588229 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.588267 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.588277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.588303 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.588838 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.660097 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.673798 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.693202 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-6f762d3e3b68df9b143b38b974ea372b105074c2315c82d04cb92ec59b539b75 WatchSource:0}: Error finding container 6f762d3e3b68df9b143b38b974ea372b105074c2315c82d04cb92ec59b539b75: Status 404 returned error can't find the container with id 6f762d3e3b68df9b143b38b974ea372b105074c2315c82d04cb92ec59b539b75
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.698750 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.699521 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-89606f1e3f60b5c56cf32f9325f6b39513281ef29886c990183c416802010470 WatchSource:0}: Error finding container 89606f1e3f60b5c56cf32f9325f6b39513281ef29886c990183c416802010470: Status 404 returned error can't find the container with id 89606f1e3f60b5c56cf32f9325f6b39513281ef29886c990183c416802010470
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.713459 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.717016 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6be3535a9e1523f3980e3a130ffd1c3f6f8df4c2085e7b866cb1590a02d94532 WatchSource:0}: Error finding container 6be3535a9e1523f3980e3a130ffd1c3f6f8df4c2085e7b866cb1590a02d94532: Status 404 returned error can't find the container with id 6be3535a9e1523f3980e3a130ffd1c3f6f8df4c2085e7b866cb1590a02d94532
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.719192 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.736252 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cb29b2ebf467f6cb79e8a8368e809d2b2d850bf6c97a96a261bbb91c8e6d12ea WatchSource:0}: Error finding container cb29b2ebf467f6cb79e8a8368e809d2b2d850bf6c97a96a261bbb91c8e6d12ea: Status 404 returned error can't find the container with id cb29b2ebf467f6cb79e8a8368e809d2b2d850bf6c97a96a261bbb91c8e6d12ea
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.736546 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1097a71f59c82b1fe3fcc53eb99fddc35376049f58ff423f4a2ccd09d351f7f3 WatchSource:0}: Error finding container 1097a71f59c82b1fe3fcc53eb99fddc35376049f58ff423f4a2ccd09d351f7f3: Status 404 returned error can't find the container with id 1097a71f59c82b1fe3fcc53eb99fddc35376049f58ff423f4a2ccd09d351f7f3
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.780608 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms"
Jan 23 13:32:39 crc kubenswrapper[4771]: W0123 13:32:39.974333 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.974501 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.988902 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.990248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.990276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.990286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:39 crc kubenswrapper[4771]: I0123 13:32:39.990309 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 13:32:39 crc kubenswrapper[4771]: E0123 13:32:39.990730 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Jan 23 13:32:40 crc kubenswrapper[4771]: W0123 13:32:40.111752 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:40 crc kubenswrapper[4771]: E0123 13:32:40.111839 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.175525 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:50:53.650530419 +0000 UTC
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.176089 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.234167 4771 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc" exitCode=0
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.234282 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.234398 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1097a71f59c82b1fe3fcc53eb99fddc35376049f58ff423f4a2ccd09d351f7f3"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.234589 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.237183 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.237228 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.237240 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.239687 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.239753 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb29b2ebf467f6cb79e8a8368e809d2b2d850bf6c97a96a261bbb91c8e6d12ea"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.241936 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422" exitCode=0
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.242023 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.242058 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6be3535a9e1523f3980e3a130ffd1c3f6f8df4c2085e7b866cb1590a02d94532"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.242194 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.243635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.243677 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.243692 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.247055 4771 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d340e0501f30c5bcd7dba070628ffeee5cda2eba6c5c2ea1f51bcdd516d8ea4f" exitCode=0
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.247114 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d340e0501f30c5bcd7dba070628ffeee5cda2eba6c5c2ea1f51bcdd516d8ea4f"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.247192 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"89606f1e3f60b5c56cf32f9325f6b39513281ef29886c990183c416802010470"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.247333 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.248571 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.248615 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.248634 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.248900 4771 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d" exitCode=0
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.248934 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.248958 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6f762d3e3b68df9b143b38b974ea372b105074c2315c82d04cb92ec59b539b75"}
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.249037 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.249999 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.250023 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.250034 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.251681 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.252995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.253014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.253024 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:40 crc kubenswrapper[4771]: E0123 13:32:40.255007 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d5f6f0bf05011 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 13:32:39.172780049 +0000 UTC m=+0.195317674,LastTimestamp:2026-01-23 13:32:39.172780049 +0000 UTC m=+0.195317674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 13:32:40 crc kubenswrapper[4771]: E0123 13:32:40.582291 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s"
Jan 23 13:32:40 crc kubenswrapper[4771]: W0123 13:32:40.591778 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:40 crc kubenswrapper[4771]: E0123 13:32:40.591852 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:40 crc kubenswrapper[4771]: W0123 13:32:40.746109 4771 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.243:6443: connect: connection refused
Jan 23 13:32:40 crc kubenswrapper[4771]: E0123 13:32:40.746185 4771 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.243:6443: connect: connection refused" logger="UnhandledError"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.791018 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.795732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.795766 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.795776 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:40 crc kubenswrapper[4771]: I0123 13:32:40.795800 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 13:32:40 crc kubenswrapper[4771]: E0123 13:32:40.796472 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.243:6443: connect: connection refused" node="crc"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.175694 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:46:45.74935658 +0000 UTC
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.203121 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.253836 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.253881 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.253901 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.253912 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.255907 4771 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6dfa803f76d28aade81fb7e6010acd6575e894117e5373e28726ceb352cb447d" exitCode=0
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.255956 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6dfa803f76d28aade81fb7e6010acd6575e894117e5373e28726ceb352cb447d"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.256145 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.257006 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.257039 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.257050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.258571 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.258704 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.259716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.259753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.259764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.261804 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.261844 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.261857 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.261938 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.262563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.262589 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.262597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.265093 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.265154 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.265171 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f"}
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.265121 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.265974 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.266012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:41 crc kubenswrapper[4771]: I0123 13:32:41.266029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.175938 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:03:20.554096642 +0000 UTC
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.273091 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.273089 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92"}
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.274061 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.274121 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.274143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.275692 4771 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2ed0b4e8cb1b961e0174ea9f1bc7f8ab588f9264aa7d64905428b878386578e7" exitCode=0
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.275794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2ed0b4e8cb1b961e0174ea9f1bc7f8ab588f9264aa7d64905428b878386578e7"}
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.275823 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.276005 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.277178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.277178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.277225 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.277237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.277206 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.277302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.396722 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.397630 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.397673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.397687 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:42 crc kubenswrapper[4771]: I0123 13:32:42.397719 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.176376 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:58:38.083148661 +0000 UTC
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.280698 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e64e5c65df58bce8861106dadc59f37f1ccf964089b99c95eaa751fd2c073f7c"}
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.280755 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.280763 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"71f8e846115fea775d4048b8ccfadfb430bca6e572676deb8d6980d5027d13ce"}
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.280785 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"491d3fbd906e45ea8204db6b69fcba11c7b4194a70153a06b9c2c54b9ad42108"}
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.280796 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af4dcb31e0f762fa75230fec2937a824190bf698664e404af8b86fad981234e0"}
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.280800 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.281743 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.281770 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:43 crc kubenswrapper[4771]: I0123 13:32:43.281779 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.177032 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:30:38.608004476 +0000 UTC
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.287214 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fde51505521f403117319497b94b1cc2798731971ecdc47f8ef3bb51aac8f906"}
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.287401 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.288258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.288288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.288296 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.563740 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.563972 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.564028 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.565568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.565603 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.565613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.677558 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.748891 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.749112 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.750779 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.750814 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:44 crc kubenswrapper[4771]: I0123 13:32:44.750829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.178151 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:04:09.980913854 +0000 UTC
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.268102 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.289451 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.289519 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.290515 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.290518 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.290553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.290596 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.290604 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:45 crc kubenswrapper[4771]: I0123 13:32:45.290605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.179219 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:58:07.893236975 +0000 UTC
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.219768 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.219960 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.221501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.221559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.221577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.228668 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.291846 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.291968 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.293101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.293134 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.293144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.293270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.293311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:46 crc kubenswrapper[4771]: I0123 13:32:46.293328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.179928 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:20:13.403371732 +0000 UTC
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.750048 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.750171 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.982317 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.982572 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.984001 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.984047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:47 crc kubenswrapper[4771]: I0123 13:32:47.984060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.180505 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:55:39.179349385 +0000 UTC
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.399126 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.399363 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.401003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.401082 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.401101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.445819 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.446022 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.447275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.447311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.447323 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.709205 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.709388 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.711005 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.711082 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:48 crc kubenswrapper[4771]: I0123 13:32:48.711102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:49 crc kubenswrapper[4771]: I0123 13:32:49.180894 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:36:22.413312016 +0000 UTC
Jan 23 13:32:49 crc kubenswrapper[4771]: E0123 13:32:49.290211 4771 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 23 13:32:50 crc kubenswrapper[4771]: I0123 13:32:50.181131 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:37:25.363633884 +0000 UTC
Jan 23 13:32:51 crc kubenswrapper[4771]: I0123 13:32:51.176394 4771 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 23 13:32:51 crc kubenswrapper[4771]: I0123 13:32:51.181632 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:07:56.500785005 +0000 UTC
Jan 23 13:32:51 crc kubenswrapper[4771]: E0123 13:32:51.205008 4771 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 23 13:32:51 crc kubenswrapper[4771]: I0123 13:32:51.321287 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 23 13:32:51 crc kubenswrapper[4771]: I0123 13:32:51.321363 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 23 13:32:51 crc kubenswrapper[4771]: I0123 13:32:51.324856 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 23 13:32:51 crc kubenswrapper[4771]: I0123 13:32:51.324919 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.103258 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.103550 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.105265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.105330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.105354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.149354 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.182672 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:56:11.051692236 +0000 UTC
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.307271 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.308773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.308816 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.308826 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:52 crc kubenswrapper[4771]: I0123 13:32:52.323777 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 23 13:32:53 crc kubenswrapper[4771]: I0123 13:32:53.183434 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 20:53:57.415930362 +0000 UTC
Jan 23 13:32:53 crc kubenswrapper[4771]: I0123 13:32:53.310100 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:53 crc kubenswrapper[4771]: I0123 13:32:53.311534 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:53 crc kubenswrapper[4771]: I0123 13:32:53.311606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:53 crc kubenswrapper[4771]: I0123 13:32:53.311631 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.183988 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:32:35.113834377 +0000 UTC
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.568112 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.568292 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.568654 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.568697 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.569713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.569744 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.569756 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.573654 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.678778 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 23 13:32:54 crc kubenswrapper[4771]: I0123 13:32:54.678845 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.184774 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:49:02.608668733 +0000 UTC
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.315164 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.315486 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.315543 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.316031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.316068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.316081 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.330249 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.345908 4771 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.369029 4771 csr.go:261] certificate signing request csr-wzqtk is approved, waiting to be issued
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.380071 4771 csr.go:257] certificate signing request csr-wzqtk is issued
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.793670 4771 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 23 13:32:55 crc kubenswrapper[4771]: I0123 13:32:55.793746 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.185920 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:38:16.832735403 +0000 UTC
Jan 23 13:32:56 crc kubenswrapper[4771]: E0123 13:32:56.320369 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.321459 4771 trace.go:236] Trace[2141724836]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 13:32:43.416) (total time: 12904ms):
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[2141724836]: ---"Objects listed" error: 12904ms (13:32:56.321)
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[2141724836]: [12.904480245s] [12.904480245s] END
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.321486 4771 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.321858 4771 trace.go:236] Trace[1826796573]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 13:32:43.238) (total time: 13083ms):
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[1826796573]: ---"Objects listed" error: 13083ms (13:32:56.321)
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[1826796573]: [13.083626846s] [13.083626846s] END
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.321878 4771 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.322615 4771 trace.go:236] Trace[364502992]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 13:32:43.748) (total time: 12574ms):
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[364502992]: ---"Objects listed" error: 12574ms (13:32:56.322)
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[364502992]: [12.574120011s] [12.574120011s] END
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.322663 4771 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 13:32:56 crc kubenswrapper[4771]: E0123 13:32:56.324122 4771 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.324778 4771 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.326800 4771 trace.go:236] Trace[1741483460]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 13:32:41.846) (total time: 14480ms):
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[1741483460]: ---"Objects listed" error: 14479ms (13:32:56.326)
Jan 23 13:32:56 crc kubenswrapper[4771]: Trace[1741483460]: [14.480096829s] [14.480096829s] END
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.326831 4771 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.359175 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.365586 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.369302 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.381949 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 13:27:55 +0000 UTC, rotation deadline is 2026-12-11 06:40:19.490303216 +0000 UTC
Jan 23 13:32:56 crc kubenswrapper[4771]: I0123 13:32:56.382022 4771 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7721h7m23.108283431s for next certificate rotation
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.179712 4771 apiserver.go:52] "Watching apiserver"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.182496 4771 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.182778 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-965tw"]
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.183190 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.183199 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.183227 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.183237 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.183275 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.183473 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.183757 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-965tw"
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.184336 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.184448 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.184438 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.186020 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:22:15.343811521 +0000 UTC Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.186628 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.186690 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.186760 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.187847 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.188094 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.188107 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.188279 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.188350 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.188516 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.191114 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.192131 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.192152 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.201169 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5dzz5"] Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.201622 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.204008 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.204007 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.205741 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.205815 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.205913 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-x6dcn"] Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.206057 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.206991 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.208784 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-z299d"] Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.208882 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.209182 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.209324 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.211205 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.211513 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.213140 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.213569 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.213850 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.214133 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.228627 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.246123 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.260632 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.274256 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.276798 4771 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.283711 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.294757 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.303091 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.311506 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.320342 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.321742 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92" exitCode=255 Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.321803 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92"} Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.323548 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.326453 4771 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329731 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329786 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329817 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329841 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329862 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329884 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329911 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329937 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329964 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.329991 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330019 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330043 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330066 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330090 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330092 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330117 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330145 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330169 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330198 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330223 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330246 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330262 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330272 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330280 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330304 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330362 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330388 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330427 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330433 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330452 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330481 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330507 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330535 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330560 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330624 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330654 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330678 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330703 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330727 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330761 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330788 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330811 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330871 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330903 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330959 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331015 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331039 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331065 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331088 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331119 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331144 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331168 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331194 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331221 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331246 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331291 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331322 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331352 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330508 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330506 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330565 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330652 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330793 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330810 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330840 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.330895 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331038 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331033 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331045 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331065 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331523 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331063 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331232 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331555 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331230 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331239 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331362 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331599 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331671 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331810 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331814 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331834 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331862 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331885 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331948 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332013 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332052 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332086 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332064 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.331378 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332262 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332277 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332289 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332303 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332320 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332320 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332333 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332351 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332386 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332464 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332466 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332493 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332521 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332546 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332571 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332581 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332582 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332607 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332596 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332663 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332689 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332712 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332713 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332739 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332769 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332789 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332810 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332830 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332875 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332897 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332917 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332936 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332954 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332972 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332973 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332977 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.332996 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333031 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333039 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333045 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333055 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333078 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333083 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333118 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333138 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333156 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333173 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333190 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333209 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333222 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333237 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333253 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333260 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333270 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333287 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333304 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333322 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333377 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333423 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333442 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333460 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333476 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333492 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333511 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333526 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333542 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333560 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333576 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333581 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333593 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333597 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333818 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333847 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333884 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333919 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333879 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334045 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334045 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334068 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334228 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334289 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334339 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334366 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.334402 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.333590 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335602 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335629 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335654 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335681 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335711 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335739 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335763 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335788 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335814 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335838 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335862 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335888 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335913 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335945 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335968 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" 
(UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.335994 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336017 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336043 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336101 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336128 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336154 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336215 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336309 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336340 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336365 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336395 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336442 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336467 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336494 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336519 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336543 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336569 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336593 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336620 4771 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336647 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336670 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336699 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336723 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336746 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336773 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336797 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336822 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336856 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 
13:32:57.336879 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336904 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336928 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336955 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336977 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337001 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337027 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337051 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337076 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337101 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: 
I0123 13:32:57.337126 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337150 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337174 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337199 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337224 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337250 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.337275 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.345016 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.345059 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.345088 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 
13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.346513 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336148 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336208 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336218 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336392 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336494 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336521 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336559 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336620 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336661 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336883 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.336938 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.338390 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.338753 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.339771 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.340196 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.342553 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.342739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.342854 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.343053 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.343105 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.343466 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.343544 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.343669 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.343801 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.343915 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.344101 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.344663 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.344982 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.345088 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.345140 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:32:57.845108642 +0000 UTC m=+18.867646437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.345468 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.345799 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.346171 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.346719 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.346746 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.346843 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.347471 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.347517 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.347917 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.347907 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.348102 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.348160 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.348237 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.348769 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.349391 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.349441 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.349467 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.349602 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.349630 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.349823 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350142 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.349554 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350211 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350279 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350310 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350339 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350365 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351017 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351108 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351178 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351286 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351320 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351347 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351373 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351401 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351455 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351484 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351511 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351536 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351608 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-k8s-cni-cncf-io\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351638 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/700ad9d9-4931-48f1-ba4c-546352bdb749-cni-binary-copy\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: 
\"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351798 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklvc\" (UniqueName: \"kubernetes.io/projected/700ad9d9-4931-48f1-ba4c-546352bdb749-kube-api-access-pklvc\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351828 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351858 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351891 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351919 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-netns\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351941 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-conf-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351963 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-cni-bin\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352012 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352040 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-cnibin\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352062 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-daemon-config\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352084 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/700ad9d9-4931-48f1-ba4c-546352bdb749-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352107 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352131 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd8e44e1-6639-45d3-927f-347dc88e96c6-proxy-tls\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352184 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352208 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-cnibin\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " 
pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352231 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-socket-dir-parent\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352253 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjwn\" (UniqueName: \"kubernetes.io/projected/cd8e44e1-6639-45d3-927f-347dc88e96c6-kube-api-access-pxjwn\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352313 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-os-release\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352348 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/803fce37-afd3-4ce0-9135-ccb3831e206c-cni-binary-copy\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352388 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352437 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdz4\" (UniqueName: \"kubernetes.io/projected/803fce37-afd3-4ce0-9135-ccb3831e206c-kube-api-access-kvdz4\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352461 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-os-release\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352487 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352513 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") 
pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352543 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd8e44e1-6639-45d3-927f-347dc88e96c6-mcd-auth-proxy-config\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352571 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b412c0bf-8f05-4214-a0a3-90ae1113bb54-hosts-file\") pod \"node-resolver-965tw\" (UID: \"b412c0bf-8f05-4214-a0a3-90ae1113bb54\") " pod="openshift-dns/node-resolver-965tw" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352618 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-system-cni-dir\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352638 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-cni-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-cni-multus\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352682 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-hostroot\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352706 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352727 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cd8e44e1-6639-45d3-927f-347dc88e96c6-rootfs\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352750 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-system-cni-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-kubelet\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352793 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-multus-certs\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352849 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-etc-kubernetes\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgc5b\" (UniqueName: \"kubernetes.io/projected/b412c0bf-8f05-4214-a0a3-90ae1113bb54-kube-api-access-wgc5b\") pod \"node-resolver-965tw\" (UID: \"b412c0bf-8f05-4214-a0a3-90ae1113bb54\") " pod="openshift-dns/node-resolver-965tw" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354552 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.355390 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356008 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350387 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350548 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350668 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356361 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350684 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350698 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350643 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350761 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.350879 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351198 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351218 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351694 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351783 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351824 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.351842 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352042 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352480 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.352911 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353261 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353280 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353350 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353548 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353560 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353669 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353806 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353843 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.353961 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354020 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354512 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354529 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354735 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354757 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354694 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354857 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.354999 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.355050 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.355094 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.355228 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.355331 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.355544 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.355839 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356081 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356657 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356244 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356285 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356754 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356803 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356937 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.356907 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.357224 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.357928 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.358142 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.358314 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.358436 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:57.858399594 +0000 UTC m=+18.880937229 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.358615 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.358759 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.358867 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.358904 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.359247 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.358904 4771 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.359317 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:57.858665203 +0000 UTC m=+18.881202828 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.359354 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.359555 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.360310 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.360507 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.360738 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.360800 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.360805 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.360981 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.360321 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361506 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361549 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361569 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361594 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361610 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361628 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361643 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361658 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361674 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361694 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361710 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361724 4771 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361738 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361757 4771 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361771 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361784 4771 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361802 4771 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361816 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361831 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361845 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361863 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361877 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361890 4771 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361906 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361924 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361939 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361954 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361973 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.361987 4771 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362000 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362014 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362031 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362045 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362059 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362072 4771 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362089 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362103 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362117 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362132 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362152 4771 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362165 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362178 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362196 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362209 4771 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362222 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362236 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362254 4771 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362268 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362272 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362284 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362332 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362358 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362639 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362663 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.362680 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363188 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363211 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363237 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363293 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363309 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363324 4771 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363338 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363357 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363372 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363385 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363400 4771 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363434 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363449 4771 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363464 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363477 4771 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363495 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363510 4771 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363524 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363541 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.363555 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364081 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364103 4771 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364127 4771 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364142 4771 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364156 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364170 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364189 4771 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364205 4771 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364226 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364250 4771 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364271 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364295 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364314 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.364371 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365089 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365136 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365163 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365193 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365215 4771 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365447 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365466 4771 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365484 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365506 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365523 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365544 4771 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365562 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365585 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365603 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365624 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365642 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365662 4771 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365680 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365704 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365722 4771 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365742 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365764 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365810 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365849 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365872 4771 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.365896 4771 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.374535 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.375313 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.375335 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.375478 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:57.87544999 +0000 UTC m=+18.897987615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.382116 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.384037 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.384355 4771 scope.go:117] "RemoveContainer" containerID="3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.384541 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.384789 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.385838 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.386104 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.386936 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.382848 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.387199 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.387221 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.387347 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:57.887316346 +0000 UTC m=+18.909853991 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.389301 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.389789 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.389920 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.390815 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.390859 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.394938 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.395131 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.395360 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.396375 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.397639 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.398647 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.400962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.402673 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.414679 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.419276 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.424471 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.428893 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.432753 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.433534 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.456258 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.465882 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-daemon-config\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466497 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/700ad9d9-4931-48f1-ba4c-546352bdb749-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd8e44e1-6639-45d3-927f-347dc88e96c6-proxy-tls\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466547 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466569 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-cnibin\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-socket-dir-parent\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466615 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxjwn\" (UniqueName: \"kubernetes.io/projected/cd8e44e1-6639-45d3-927f-347dc88e96c6-kube-api-access-pxjwn\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466649 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/803fce37-afd3-4ce0-9135-ccb3831e206c-cni-binary-copy\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466673 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-os-release\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466694 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvdz4\" (UniqueName: \"kubernetes.io/projected/803fce37-afd3-4ce0-9135-ccb3831e206c-kube-api-access-kvdz4\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466716 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-os-release\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd8e44e1-6639-45d3-927f-347dc88e96c6-mcd-auth-proxy-config\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466777 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b412c0bf-8f05-4214-a0a3-90ae1113bb54-hosts-file\") pod \"node-resolver-965tw\" (UID: \"b412c0bf-8f05-4214-a0a3-90ae1113bb54\") " pod="openshift-dns/node-resolver-965tw"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466798 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-system-cni-dir\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466822 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName:
\"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-hostroot\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466843 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-cni-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466863 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-cni-multus\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466886 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-system-cni-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-kubelet\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-multus-certs\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466955 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-etc-kubernetes\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.466988 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cd8e44e1-6639-45d3-927f-347dc88e96c6-rootfs\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467013 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgc5b\" (UniqueName: \"kubernetes.io/projected/b412c0bf-8f05-4214-a0a3-90ae1113bb54-kube-api-access-wgc5b\") pod \"node-resolver-965tw\" (UID: \"b412c0bf-8f05-4214-a0a3-90ae1113bb54\") " pod="openshift-dns/node-resolver-965tw" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467036 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/700ad9d9-4931-48f1-ba4c-546352bdb749-cni-binary-copy\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " 
pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467062 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467086 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklvc\" (UniqueName: \"kubernetes.io/projected/700ad9d9-4931-48f1-ba4c-546352bdb749-kube-api-access-pklvc\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467124 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-k8s-cni-cncf-io\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467157 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-conf-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467260 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-netns\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467290 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467326 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-cni-bin\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467352 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-cnibin\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467405 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467442 
4771 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467458 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467472 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467484 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467497 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467512 4771 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467526 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467540 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467553 4771 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467567 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467580 4771 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467593 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467604 4771 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467616 4771 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467628 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467640 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467652 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467666 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467678 4771 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467702 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467715 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467729 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467743 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467757 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467770 4771 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467788 4771 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467813 4771 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467827 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467839 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467852 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467864 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467876 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467888 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467902 4771 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467914 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467926 4771 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467939 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467952 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467965 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467979 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" 
(UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.467992 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468004 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468016 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468028 4771 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468040 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468055 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468068 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468081 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468096 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468110 4771 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468124 4771 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468137 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468154 4771 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468167 4771 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468180 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468378 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468390 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468401 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468430 4771 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468442 4771 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468454 4771 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468466 4771 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468477 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468490 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468502 4771 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468515 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468528 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468543 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468556 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468570 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468583 4771 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468597 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468611 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468625 4771 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468641 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468652 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468665 4771 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468678 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.468742 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-cnibin\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469170 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-cni-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469495 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-cni-multus\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469692 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-daemon-config\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469695 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-system-cni-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-kubelet\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469800 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-multus-certs\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-etc-kubernetes\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.469894 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/cd8e44e1-6639-45d3-927f-347dc88e96c6-rootfs\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470132 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " 
pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470394 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-k8s-cni-cncf-io\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/700ad9d9-4931-48f1-ba4c-546352bdb749-cni-binary-copy\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470471 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-run-netns\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470479 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-conf-dir\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470515 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470550 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-host-var-lib-cni-bin\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470550 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-os-release\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470583 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470694 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-os-release\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.470995 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/700ad9d9-4931-48f1-ba4c-546352bdb749-system-cni-dir\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.471044 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b412c0bf-8f05-4214-a0a3-90ae1113bb54-hosts-file\") pod \"node-resolver-965tw\" (UID: \"b412c0bf-8f05-4214-a0a3-90ae1113bb54\") " pod="openshift-dns/node-resolver-965tw" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.471072 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-hostroot\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.471103 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-cnibin\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.471153 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/803fce37-afd3-4ce0-9135-ccb3831e206c-multus-socket-dir-parent\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.471172 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/803fce37-afd3-4ce0-9135-ccb3831e206c-cni-binary-copy\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.471438 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/700ad9d9-4931-48f1-ba4c-546352bdb749-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.473332 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd8e44e1-6639-45d3-927f-347dc88e96c6-mcd-auth-proxy-config\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.476263 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd8e44e1-6639-45d3-927f-347dc88e96c6-proxy-tls\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.486816 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.493372 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvdz4\" (UniqueName: \"kubernetes.io/projected/803fce37-afd3-4ce0-9135-ccb3831e206c-kube-api-access-kvdz4\") pod \"multus-5dzz5\" (UID: \"803fce37-afd3-4ce0-9135-ccb3831e206c\") " pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.496279 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklvc\" (UniqueName: \"kubernetes.io/projected/700ad9d9-4931-48f1-ba4c-546352bdb749-kube-api-access-pklvc\") pod \"multus-additional-cni-plugins-x6dcn\" (UID: \"700ad9d9-4931-48f1-ba4c-546352bdb749\") " pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.496616 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxjwn\" (UniqueName: \"kubernetes.io/projected/cd8e44e1-6639-45d3-927f-347dc88e96c6-kube-api-access-pxjwn\") pod \"machine-config-daemon-z299d\" (UID: \"cd8e44e1-6639-45d3-927f-347dc88e96c6\") " pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.507874 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgc5b\" (UniqueName: \"kubernetes.io/projected/b412c0bf-8f05-4214-a0a3-90ae1113bb54-kube-api-access-wgc5b\") pod \"node-resolver-965tw\" (UID: \"b412c0bf-8f05-4214-a0a3-90ae1113bb54\") " pod="openshift-dns/node-resolver-965tw" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.514802 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.516562 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.527938 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-965tw" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.532844 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.536939 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.555769 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.562667 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5dzz5" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.562729 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.572795 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.575804 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.584131 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qbvcq"] Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.584858 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.588680 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.590712 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.591340 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.591477 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.591660 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.591699 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.591727 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.591832 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.603645 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: W0123 13:32:57.604676 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a3eb072751cca3aaa6b89bbc9a4748b918a9d0a35fef10117170b214c99937c0 WatchSource:0}: Error finding container a3eb072751cca3aaa6b89bbc9a4748b918a9d0a35fef10117170b214c99937c0: Status 404 returned error can't find the container with id a3eb072751cca3aaa6b89bbc9a4748b918a9d0a35fef10117170b214c99937c0 Jan 23 13:32:57 crc kubenswrapper[4771]: W0123 13:32:57.605169 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4a96a5f321535563978fd09f1b61a248495c5b5986612473779e52cacc0972a7 WatchSource:0}: Error finding container 4a96a5f321535563978fd09f1b61a248495c5b5986612473779e52cacc0972a7: Status 404 returned error can't find the container with id 4a96a5f321535563978fd09f1b61a248495c5b5986612473779e52cacc0972a7 Jan 23 13:32:57 crc kubenswrapper[4771]: W0123 13:32:57.620146 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803fce37_afd3_4ce0_9135_ccb3831e206c.slice/crio-937ec5e98181dc79f9d3b37281d47f5238ddfd534940bcda7958ec9176e59fa3 WatchSource:0}: Error finding container 937ec5e98181dc79f9d3b37281d47f5238ddfd534940bcda7958ec9176e59fa3: Status 404 returned error can't find the container with id 937ec5e98181dc79f9d3b37281d47f5238ddfd534940bcda7958ec9176e59fa3 Jan 23 13:32:57 crc kubenswrapper[4771]: W0123 13:32:57.621647 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod700ad9d9_4931_48f1_ba4c_546352bdb749.slice/crio-f659d43239c7ff2f1419b49d2ea4280285dd058169b937e7c34e858fc3ab420c WatchSource:0}: Error finding container f659d43239c7ff2f1419b49d2ea4280285dd058169b937e7c34e858fc3ab420c: Status 404 returned error can't find the container with id f659d43239c7ff2f1419b49d2ea4280285dd058169b937e7c34e858fc3ab420c Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.622371 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.633880 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: W0123 13:32:57.637194 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8e44e1_6639_45d3_927f_347dc88e96c6.slice/crio-adcdac5164116b409c3ab789e69d0f93469caf96fb81db68e7921f7bec884676 WatchSource:0}: Error finding container adcdac5164116b409c3ab789e69d0f93469caf96fb81db68e7921f7bec884676: Status 404 returned error can't find the container with id adcdac5164116b409c3ab789e69d0f93469caf96fb81db68e7921f7bec884676 Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.645390 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.658336 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.669510 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.686735 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.699245 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.709793 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.723213 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.736239 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.748949 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.766514 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770563 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-bin\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770615 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770657 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-slash\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770684 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770740 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-netd\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770767 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovn-node-metrics-cert\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770790 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-var-lib-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.770812 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-etc-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.771617 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-log-socket\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.771689 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.771868 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-kubelet\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.771913 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-env-overrides\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.771948 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bww\" (UniqueName: \"kubernetes.io/projected/4ba84e18-6300-433f-98d7-f1a2ddd0073c-kube-api-access-g4bww\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.771975 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-node-log\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.771995 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-config\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.772028 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-systemd-units\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.772048 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-systemd\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.772080 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-netns\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.772102 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-ovn\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.772267 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-script-lib\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.781778 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.872632 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.872877 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:32:58.87285301 +0000 UTC m=+19.895390635 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-bin\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873157 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-slash\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873215 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873240 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-etc-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873267 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-netd\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873295 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovn-node-metrics-cert\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873321 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-var-lib-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873326 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873348 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873433 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.873453 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873465 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-log-socket\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-bin\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873493 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-kubelet\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.873513 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:58.87349667 +0000 UTC m=+19.896034465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873547 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873509 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-netd\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873570 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-log-socket\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873545 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-env-overrides\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873548 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873593 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-etc-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873517 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-kubelet\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873629 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4bww\" (UniqueName: \"kubernetes.io/projected/4ba84e18-6300-433f-98d7-f1a2ddd0073c-kube-api-access-g4bww\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873647 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-node-log\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873660 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-config\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873685 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-systemd\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873702 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-systemd-units\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873718 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-netns\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873738 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873756 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-ovn\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873772 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-script-lib\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873853 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-netns\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873885 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-slash\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873888 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-systemd\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873903 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-systemd-units\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873632 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-var-lib-openvswitch\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873923 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-ovn\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.873956 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.873969 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-node-log\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.874007 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:58.873993546 +0000 UTC m=+19.896531351 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.875967 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-env-overrides\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.879549 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-script-lib\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.879550 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-config\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.880151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovn-node-metrics-cert\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.897492 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4bww\" (UniqueName: \"kubernetes.io/projected/4ba84e18-6300-433f-98d7-f1a2ddd0073c-kube-api-access-g4bww\") pod \"ovnkube-node-qbvcq\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.918444 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:32:57 crc kubenswrapper[4771]: W0123 13:32:57.929190 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ba84e18_6300_433f_98d7_f1a2ddd0073c.slice/crio-26753b794314c9314f55ee549252c6309c081a3afd46d5e7d434727d53deb321 WatchSource:0}: Error finding container 26753b794314c9314f55ee549252c6309c081a3afd46d5e7d434727d53deb321: Status 404 returned error can't find the container with id 26753b794314c9314f55ee549252c6309c081a3afd46d5e7d434727d53deb321 Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.974647 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:32:57 crc kubenswrapper[4771]: I0123 13:32:57.974715 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.974858 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.974875 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.974938 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.974953 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.974884 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.975027 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:58.975006604 +0000 UTC m=+19.997544419 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.975042 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:57 crc kubenswrapper[4771]: E0123 13:32:57.975099 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:32:58.975083037 +0000 UTC m=+19.997620662 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.187058 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:02:35.61046301 +0000 UTC Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.227572 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.227707 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.331606 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.331646 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.331658 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"adcdac5164116b409c3ab789e69d0f93469caf96fb81db68e7921f7bec884676"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.333598 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-965tw" event={"ID":"b412c0bf-8f05-4214-a0a3-90ae1113bb54","Type":"ContainerStarted","Data":"65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.333617 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-965tw" event={"ID":"b412c0bf-8f05-4214-a0a3-90ae1113bb54","Type":"ContainerStarted","Data":"c1242f6c283be740c4ce875944d9d6e07e2e24a062c8640783268071e062fc27"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.334762 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerStarted","Data":"e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.334779 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerStarted","Data":"937ec5e98181dc79f9d3b37281d47f5238ddfd534940bcda7958ec9176e59fa3"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.336917 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.336942 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a3eb072751cca3aaa6b89bbc9a4748b918a9d0a35fef10117170b214c99937c0"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.338948 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052" exitCode=0 Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.338995 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" 
event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.339010 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"26753b794314c9314f55ee549252c6309c081a3afd46d5e7d434727d53deb321"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.356206 4771 generic.go:334] "Generic (PLEG): container finished" podID="700ad9d9-4931-48f1-ba4c-546352bdb749" containerID="f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0" exitCode=0 Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.356287 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerDied","Data":"f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.356322 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerStarted","Data":"f659d43239c7ff2f1419b49d2ea4280285dd058169b937e7c34e858fc3ab420c"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.370079 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.379334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.379626 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.379639 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e40b93cad0daca84fba65002411dff67b62eca9c354e047a24539f49a9e6022f"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.381598 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.395582 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.395838 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.399773 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.403059 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4a96a5f321535563978fd09f1b61a248495c5b5986612473779e52cacc0972a7"} Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.413935 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.428495 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.447755 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.459800 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.473472 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.485991 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.499121 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.511562 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.524320 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.536656 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.555939 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.573029 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.585674 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.595974 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.606775 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.620297 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.634799 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.649165 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc 
kubenswrapper[4771]: I0123 13:32:58.660529 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.686216 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",
\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.700257 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.715976 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.732489 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.752196 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:58Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.884661 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.885215 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.885346 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.885574 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.885690 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:00.885676766 +0000 UTC m=+21.908214381 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.885803 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:33:00.88579559 +0000 UTC m=+21.908333215 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.885897 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.885968 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:00.885961365 +0000 UTC m=+21.908498980 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.986018 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:32:58 crc kubenswrapper[4771]: I0123 13:32:58.986088 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986239 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986257 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986272 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986319 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:00.986303721 +0000 UTC m=+22.008841346 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986686 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986699 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986707 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:58 crc kubenswrapper[4771]: E0123 13:32:58.986732 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:00.986723895 +0000 UTC m=+22.009261520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.105879 4771 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.188145 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:55:43.002231243 +0000 UTC Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.230070 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:32:59 crc kubenswrapper[4771]: E0123 13:32:59.230210 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.230309 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:32:59 crc kubenswrapper[4771]: E0123 13:32:59.230354 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.236183 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.237208 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.238778 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.239509 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.240814 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.243734 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.244626 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.245317 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.246546 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.247077 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.248311 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.249213 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.250677 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.251594 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.253044 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.253830 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.254720 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.255859 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.256614 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.257429 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.258727 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.258841 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.259332 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.260366 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.261333 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.262422 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.263342 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.264058 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.264579 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.265214 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.265740 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.266235 4771 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.266336 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.269098 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.269651 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.270566 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.272081 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.272725 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.273614 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.274280 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.275321 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.275814 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.276748 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.277363 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.278766 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.279224 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.280179 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.280767 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.281904 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.282499 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.283446 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.283907 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.294461 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.294656 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.295132 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.296233 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.321660 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.339616 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.366436 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.381472 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.400747 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.421978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.422297 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.422314 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.425445 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerStarted","Data":"6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.430276 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.449773 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.480524 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.508462 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.524548 4771 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.526366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.526391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.526400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.526520 4771 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.533301 4771 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.533615 4771 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.534880 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.534927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.534940 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.534961 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.534974 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.536445 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.554454 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc 
kubenswrapper[4771]: E0123 13:32:59.562031 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.565435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.565477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 
crc kubenswrapper[4771]: I0123 13:32:59.565487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.565501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.565510 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.576205 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: E0123 13:32:59.585105 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.588008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.588026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.588034 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.588047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.588056 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.590098 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: E0123 13:32:59.601711 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.604464 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.604758 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.604774 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.604782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.604795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.604805 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.620420 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: E0123 13:32:59.622669 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [... \\\"images\\\" list and \\\"nodeInfo\\\" elided: verbatim duplicate of the payload logged at 13:32:59.601711 above ...]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.626973 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.627009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.627021 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.627038 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.627051 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.635769 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: E0123 13:32:59.638849 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [... \\\"images\\\" list and \\\"nodeInfo\\\" elided: verbatim duplicate of the payload logged at 13:32:59.601711 above ...]\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: E0123 13:32:59.638989 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.640614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.640648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.640658 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.640673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.640683 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.659908 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z 
is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.673333 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.689765 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.704474 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.717894 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.739312 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.743163 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.743205 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.743215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.743250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.743277 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.757552 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.770330 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.845551 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.845583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.845591 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.845606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.845616 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.873523 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-gnfrx"] Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.874159 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.878787 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.879393 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.879442 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.879442 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.901195 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.910884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-serviceca\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.910975 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54vbk\" (UniqueName: \"kubernetes.io/projected/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-kube-api-access-54vbk\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.911051 
4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-host\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.915921 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.946801 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.947657 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.947701 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.947717 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.947735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.947748 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:32:59Z","lastTransitionTime":"2026-01-23T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:32:59 crc kubenswrapper[4771]: I0123 13:32:59.987923 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:32:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.011942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-host\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.012027 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-serviceca\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.012049 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-host\") pod 
\"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.012073 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54vbk\" (UniqueName: \"kubernetes.io/projected/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-kube-api-access-54vbk\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.013034 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-serviceca\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.033654 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\
\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.050354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.050435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.050453 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.050490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.050501 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.060132 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54vbk\" (UniqueName: \"kubernetes.io/projected/6b97569b-da05-4b9b-826f-f4ffc7efb2fa-kube-api-access-54vbk\") pod \"node-ca-gnfrx\" (UID: \"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\") " pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.090572 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.124232 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.152530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.152594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.152604 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.152617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.152629 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.175539 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z 
is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.190288 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:02:47.742305342 +0000 UTC Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.192607 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-gnfrx" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.205570 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.227635 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:00 crc kubenswrapper[4771]: E0123 13:33:00.227785 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.245418 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.256181 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.256219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.256429 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.256448 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.256462 4771 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: W0123 13:33:00.268982 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b97569b_da05_4b9b_826f_f4ffc7efb2fa.slice/crio-27b4cbbb0967996b7a2a97d841a1fa495057d955e1b61a9b0824b9fecd73d363 WatchSource:0}: Error finding container 27b4cbbb0967996b7a2a97d841a1fa495057d955e1b61a9b0824b9fecd73d363: Status 404 returned error can't find the container with id 27b4cbbb0967996b7a2a97d841a1fa495057d955e1b61a9b0824b9fecd73d363 Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.286623 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.326005 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.359319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.359373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.359385 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.359432 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.359458 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.366638 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.404630 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.431977 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.432015 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.432027 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.433699 4771 generic.go:334] "Generic (PLEG): container finished" podID="700ad9d9-4931-48f1-ba4c-546352bdb749" containerID="6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21" exitCode=0 Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.433757 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerDied","Data":"6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.434320 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gnfrx" 
event={"ID":"6b97569b-da05-4b9b-826f-f4ffc7efb2fa","Type":"ContainerStarted","Data":"27b4cbbb0967996b7a2a97d841a1fa495057d955e1b61a9b0824b9fecd73d363"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.449822 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.461839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.461878 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.461889 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.461906 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.461917 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.490756 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.529054 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.565608 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.565661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.565674 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.565695 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.565708 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.566072 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.614802 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.647483 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.668076 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.668117 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.668126 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.668142 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.668153 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.685297 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.734863 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.763295 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.770086 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.770117 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.770128 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.770143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.770157 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.806925 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.847482 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.872014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.872059 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.872068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.872083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.872095 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.886752 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.921174 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.921332 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:00 crc 
kubenswrapper[4771]: E0123 13:33:00.921372 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:33:04.921349905 +0000 UTC m=+25.943887540 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:33:00 crc kubenswrapper[4771]: E0123 13:33:00.921404 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.921458 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:00 crc kubenswrapper[4771]: E0123 13:33:00.921464 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:04.921451978 +0000 UTC m=+25.943989603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:00 crc kubenswrapper[4771]: E0123 13:33:00.921563 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:00 crc kubenswrapper[4771]: E0123 13:33:00.921597 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:04.921587683 +0000 UTC m=+25.944125318 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.935012 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.975000 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.975052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.975069 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.975092 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.975108 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:00Z","lastTransitionTime":"2026-01-23T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:00 crc kubenswrapper[4771]: I0123 13:33:00.975668 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:00Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.022662 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.022721 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.022847 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.022865 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.022876 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.022903 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.022944 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.022959 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.022928 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:05.022914901 +0000 UTC m=+26.045452526 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.023038 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:05.023014363 +0000 UTC m=+26.045551998 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.077936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.077980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.077992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.078006 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.078015 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.181161 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.181259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.181278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.181301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.181317 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.190618 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:26:32.338663466 +0000 UTC Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.227932 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.228102 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.228356 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:01 crc kubenswrapper[4771]: E0123 13:33:01.228673 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.283726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.283776 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.283789 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.283810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.283824 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.387090 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.387167 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.387194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.387233 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.387258 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.439470 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-gnfrx" event={"ID":"6b97569b-da05-4b9b-826f-f4ffc7efb2fa","Type":"ContainerStarted","Data":"4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.443790 4771 generic.go:334] "Generic (PLEG): container finished" podID="700ad9d9-4931-48f1-ba4c-546352bdb749" containerID="5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68" exitCode=0 Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.443883 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerDied","Data":"5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.447395 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.460134 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.479271 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.489570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.489605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.489615 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.489632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.489643 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.493861 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.508206 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.526775 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.542380 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.559955 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z 
is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.572472 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.584434 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.592843 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.592876 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.592885 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.592903 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.592914 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.601029 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.615667 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.631688 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.645724 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.656168 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.668175 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.681612 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.695647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.695673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.695681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.695694 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.695703 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.699149 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.711214 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.727257 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.768273 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.797887 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.797937 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.797949 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.797964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.797974 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.804987 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.851606 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.886249 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.899874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.899903 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.899912 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.899925 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.899935 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:01Z","lastTransitionTime":"2026-01-23T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.930255 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:01 crc kubenswrapper[4771]: I0123 13:33:01.964702 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:01Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.002571 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.002610 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.002621 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.002636 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.002648 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.018444 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.052353 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.087537 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.105445 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.105514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.105535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.105560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.105579 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.191457 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:51:57.370218546 +0000 UTC Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.208730 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.208776 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.208789 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.208806 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.208819 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.227977 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:02 crc kubenswrapper[4771]: E0123 13:33:02.228114 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.311516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.311588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.311614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.311644 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.311668 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.414733 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.414797 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.414814 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.414841 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.414862 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.458975 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.463675 4771 generic.go:334] "Generic (PLEG): container finished" podID="700ad9d9-4931-48f1-ba4c-546352bdb749" containerID="2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31" exitCode=0 Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.463724 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerDied","Data":"2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.483656 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.501041 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.517705 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.517757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.517773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.517791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.517804 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.525695 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.539131 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.552113 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.564607 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.580958 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.592214 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.605209 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.616560 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.620459 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.620508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.620520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.620541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.620568 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.628857 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.642675 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.655271 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.676201 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:02Z 
is after 2025-08-24T17:21:41Z" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.723286 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.723326 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.723339 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.723361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.723378 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.833295 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.833343 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.833353 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.833370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.833382 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.936194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.936255 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.936267 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.936283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:02 crc kubenswrapper[4771]: I0123 13:33:02.936294 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:02Z","lastTransitionTime":"2026-01-23T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
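
Every "Failed to update status for pod" error in this stretch has the same root cause: the network-node-identity webhook on 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) lies months behind the node clock (2026-01-23), so each kubelet status patch dies in the TLS handshake before it reaches the API server. Below is a minimal Go sketch of the same validity-window check, assuming the webhook's certificate has been saved to a local PEM file (the filename is hypothetical); the certificate itself can be captured with openssl s_client -connect 127.0.0.1:9743 and its dates printed with openssl x509 -noout -dates.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path: a PEM dump of the certificate served on 127.0.0.1:9743.
        data, err := os.ReadFile("webhook-cert.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The handshake only succeeds if "now" falls inside [NotBefore, NotAfter];
        // the log's "expired or is not yet valid" wording covers both ends.
        now := time.Now().UTC()
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Println("certificate is inside its validity window")
        }
    }
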
Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.039752 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.039816 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.039827 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.039846 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.039861 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.142701 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.142750 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.142760 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.142785 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.142796 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.191925 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:04:45.070883977 +0000 UTC Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.227349 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.227396 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:03 crc kubenswrapper[4771]: E0123 13:33:03.227515 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
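
The certificate_manager.go line above concerns a different certificate on a healthier path: the kubelet-serving cert expires 2026-02-24, and its rotation deadline, 2025-12-29, is already in the past, so the kubelet will attempt rotation as soon as the API server is reachable. Upstream, that deadline is drawn at a jittered point roughly 70-90% of the way through the validity window; the sketch below assumes that range (and a hypothetical NotBefore, since the log only shows the expiry).

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mimics, as a sketch rather than the exact upstream code,
    // how the kubelet's certificate manager schedules rotation: a random point
    // 70-90% of the way through the certificate's validity window.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        validity := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(validity) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, time.November, 26, 5, 53, 3, 0, time.UTC) // hypothetical issue time
        notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
        fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
    }
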
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:03 crc kubenswrapper[4771]: E0123 13:33:03.227610 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.245333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.245372 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.245382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.245397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.245422 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.348541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.348632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.348642 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.348664 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.348677 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
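
Each repeating "Node became not ready" entry is the kubelet's status setter re-asserting a Ready condition of status False with reason KubeletNotReady, carrying the CNI message verbatim; only the heartbeat timestamp changes between repetitions. Here is the condition's shape rebuilt with the standard k8s.io/api types — a sketch of the payload, not the kubelet's own setters.go code, assuming k8s.io/api and k8s.io/apimachinery are on the module path.

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        now := metav1.NewTime(time.Now())
        // The condition the kubelet keeps re-writing while no CNI config exists.
        ready := corev1.NodeCondition{
            Type:               corev1.NodeReady,
            Status:             corev1.ConditionFalse,
            LastHeartbeatTime:  now,
            LastTransitionTime: now,
            Reason:             "KubeletNotReady",
            Message: "container runtime network not ready: NetworkReady=false " +
                "reason:NetworkPluginNotReady message:Network plugin returns error: " +
                "no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
                "Has your network provider started?",
        }
        fmt.Printf("%+v\n", ready)
    }
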
Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.451822 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.451873 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.451890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.451909 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.451924 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.474804 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerStarted","Data":"13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.488677 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.506588 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.519554 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
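
The payloads quoted in these "failed to patch status" errors are strategic-merge patches: the $setElementOrder/conditions directive pins the ordering of the conditions list while only the changed elements and fields are sent. Below is a sketch of that shape assembled from plain maps, with the uid taken from the kube-controller-manager entry above and the condition values abbreviated stand-ins.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Skeleton of a kubelet status patch as quoted in the log. The
        // "$setElementOrder/..." key is a strategic-merge-patch directive,
        // not a real field on the Pod object.
        patch := map[string]interface{}{
            "metadata": map[string]interface{}{
                "uid": "670d2340-5b79-4ff2-a3e2-8dd3a827de98",
            },
            "status": map[string]interface{}{
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "PodReadyToStartContainers"},
                    {"type": "Initialized"},
                    {"type": "Ready"},
                    {"type": "ContainersReady"},
                    {"type": "PodScheduled"},
                },
                "conditions": []map[string]interface{}{
                    {"type": "Ready", "status": "True",
                        "lastTransitionTime": "2026-01-23T13:32:56Z"},
                },
            },
        }
        out, err := json.Marshal(patch)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
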
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.535962 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.553088 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.555203 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.555240 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.555252 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.555270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.555284 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
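
The NotReady loop itself is mechanical: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI config yet, and the ovnkube-node pod that would provide one (its host-cni-netd mount on /etc/cni/net.d is visible in the ovnkube-node-qbvcq statuses above) is still stuck in PodInitializing behind the expired webhook. A sketch of the presence check follows, assuming the conventional .conf/.conflist/.json extensions rather than the runtime's exact matching rules.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil && !os.IsNotExist(err) {
            panic(err)
        }
        var confs []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                confs = append(confs, filepath.Join(dir, e.Name()))
            }
        }
        if len(confs) == 0 {
            // Matches the log's complaint: no CNI configuration file present.
            fmt.Println("network plugin not ready: no CNI configuration file in", dir)
            return
        }
        fmt.Println("found CNI configs:", confs)
    }
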
Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.569919 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.584240 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.595514 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.613882 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z 
is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.625533 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.639834 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.656052 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.657913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.657949 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.657961 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.657978 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.657990 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.671800 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.684208 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:03Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.761113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.761531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.761544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.761562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.761588 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.864477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.864524 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.864535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.864549 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.864559 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.966684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.966721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.966732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.966746 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:03 crc kubenswrapper[4771]: I0123 13:33:03.966755 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:03Z","lastTransitionTime":"2026-01-23T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.071231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.071287 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.071298 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.071315 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.071327 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.173667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.173724 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.173736 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.173749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.173760 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.193002 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:35:12.585005961 +0000 UTC Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.227477 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:04 crc kubenswrapper[4771]: E0123 13:33:04.227735 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.276784 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.276822 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.276832 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.276851 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.276862 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.380263 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.380328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.380340 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.380362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.380376 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.482042 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.482124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.482141 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.482160 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.482174 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.483817 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.484233 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.484256 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.497339 4771 generic.go:334] "Generic (PLEG): container finished" podID="700ad9d9-4931-48f1-ba4c-546352bdb749" containerID="13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f" exitCode=0 Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.497389 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerDied","Data":"13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.500188 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.519730 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.519797 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.519825 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.537256 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.552779 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.573766 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.585075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.585163 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.585171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 
13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.585188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.585197 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.590525 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-cont
roller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.602987 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.615660 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.631230 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.647048 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.662351 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.678251 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.688273 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.688323 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.688338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.688359 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.688371 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.694682 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.710606 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.726219 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.741210 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.755877 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.769801 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.785379 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.790669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.790735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.790750 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.790770 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.790782 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.800996 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.826818 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.840691 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.855690 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.868586 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.880584 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.893811 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.894232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.894270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.894285 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.894302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.894625 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.906975 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.920281 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:04Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.959932 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.960082 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:04 crc kubenswrapper[4771]: E0123 13:33:04.960091 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:33:12.960065498 +0000 UTC m=+33.982603123 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.960211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:04 crc kubenswrapper[4771]: E0123 13:33:04.960264 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:04 crc kubenswrapper[4771]: E0123 13:33:04.960267 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:04 crc kubenswrapper[4771]: E0123 13:33:04.960327 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:12.960314947 +0000 UTC m=+33.982852572 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:04 crc kubenswrapper[4771]: E0123 13:33:04.960362 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:12.960336967 +0000 UTC m=+33.982874632 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.997207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.997276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.997301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.997331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:04 crc kubenswrapper[4771]: I0123 13:33:04.997360 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:04Z","lastTransitionTime":"2026-01-23T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.060993 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.061100 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061167 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061196 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061208 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061250 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061273 4771 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061288 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061257 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:13.061241927 +0000 UTC m=+34.083779552 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.061354 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:13.06133674 +0000 UTC m=+34.083874385 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.099680 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.099725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.099738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.099757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.099772 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.193633 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:57:28.919804705 +0000 UTC Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.202927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.202977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.202987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.203006 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.203016 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.227343 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.227523 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.227625 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:05 crc kubenswrapper[4771]: E0123 13:33:05.227838 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.305772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.305815 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.305827 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.305844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.305857 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.409783 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.410247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.410264 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.410284 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.410300 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.505800 4771 generic.go:334] "Generic (PLEG): container finished" podID="700ad9d9-4931-48f1-ba4c-546352bdb749" containerID="bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece" exitCode=0 Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.505866 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerDied","Data":"bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.505996 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.513272 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.513323 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.513333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.513358 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.513371 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.528579 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.549346 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.560723 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.577241 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.591918 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.614459 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.616309 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.616394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.616522 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc 
kubenswrapper[4771]: I0123 13:33:05.616788 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.616809 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.635882 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.653893 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.669506 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.681149 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.702935 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.716529 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.719450 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.719518 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.719531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.719552 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.719565 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.732047 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.744009 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:05Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.822294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.822351 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.822368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.822392 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.822437 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.925007 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.925052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.925063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.925078 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:05 crc kubenswrapper[4771]: I0123 13:33:05.925090 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:05Z","lastTransitionTime":"2026-01-23T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.028330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.028382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.028403 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.028489 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.028526 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.132171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.132226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.132238 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.132258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.132270 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.194587 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:17:30.905856533 +0000 UTC Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.227044 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:06 crc kubenswrapper[4771]: E0123 13:33:06.227206 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.236085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.236124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.236136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.236152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.236168 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.338729 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.338809 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.338829 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.338849 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.338867 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.442448 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.442508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.442521 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.442541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.442560 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.513103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" event={"ID":"700ad9d9-4931-48f1-ba4c-546352bdb749","Type":"ContainerStarted","Data":"56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.513163 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.526124 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.536234 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.545510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.545565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.545582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.545606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.545626 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.550431 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.563362 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.576688 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.591291 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.612062 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.627677 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.645233 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.649105 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.649148 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.649164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc 
kubenswrapper[4771]: I0123 13:33:06.649184 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.649200 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.664632 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.679379 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.715507 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.733215 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.751727 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.751773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.751783 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.751800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.751811 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.753221 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:06Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.854062 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.854105 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.854115 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.854131 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.854142 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.956118 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.956150 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.956158 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.956172 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:06 crc kubenswrapper[4771]: I0123 13:33:06.956181 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:06Z","lastTransitionTime":"2026-01-23T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.058513 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.058566 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.058579 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.058597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.058613 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.160969 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.161012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.161021 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.161037 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.161048 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.194953 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:28:47.280780662 +0000 UTC Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.227351 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.227524 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:07 crc kubenswrapper[4771]: E0123 13:33:07.227727 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:07 crc kubenswrapper[4771]: E0123 13:33:07.227877 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.263681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.263726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.263738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.263757 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.263770 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.366881 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.366924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.366942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.366958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.366969 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.469733 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.469794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.469804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.469820 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.469832 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.572738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.572786 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.572796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.572817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.572830 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.675867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.675913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.675925 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.675938 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.675949 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.778761 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.778823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.778841 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.778863 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.778880 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.882008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.882058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.882066 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.882081 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.882091 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.986528 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.986595 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.986607 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.986629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:07 crc kubenswrapper[4771]: I0123 13:33:07.986642 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:07Z","lastTransitionTime":"2026-01-23T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.089471 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.089506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.089515 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.089528 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.089564 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.192795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.192857 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.192869 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.192888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.192907 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.196061 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:51:27.274738677 +0000 UTC Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.227682 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:08 crc kubenswrapper[4771]: E0123 13:33:08.227850 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.295070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.295115 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.295127 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.295145 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.295157 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.397313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.397344 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.397352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.397366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.397375 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.499745 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.499788 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.499795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.499810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.499821 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.520596 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/0.log" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.523611 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b" exitCode=1 Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.523653 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.524236 4771 scope.go:117] "RemoveContainer" containerID="757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.536590 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.555058 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.574064 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.586822 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.601592 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.602765 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.602790 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.602799 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.602811 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.602821 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.615135 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.631092 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.646676 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.661319 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.673932 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.688901 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.702541 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.705864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.705915 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.705929 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.705948 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.705958 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.718647 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.730494 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:08Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.808992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.809029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.809038 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.809053 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.809063 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.912256 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.912293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.912302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.912316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:08 crc kubenswrapper[4771]: I0123 13:33:08.912327 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:08Z","lastTransitionTime":"2026-01-23T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.015306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.015399 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.015477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.015512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.015535 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.117966 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.118009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.118020 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.118036 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.118047 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.196846 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:37:23.357104069 +0000 UTC Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.220894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.220947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.220993 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.221014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.221031 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.227365 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:09 crc kubenswrapper[4771]: E0123 13:33:09.227503 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.227704 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:09 crc kubenswrapper[4771]: E0123 13:33:09.227864 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.243312 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.256931 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.269618 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.287918 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.303269 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.321454 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.322215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.322236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.322245 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.322260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.322277 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.337181 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:
33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.349094 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.368092 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde0
64b8579287ad5504740d402b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.381799 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.394634 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.408363 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.420991 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.424560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.424611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.424623 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.424639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.424649 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.433463 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.526747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.526778 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.526789 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.526802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.526813 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.533112 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/0.log" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.539758 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.539921 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.554738 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.567127 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.580235 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.593526 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.612707 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079
b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.626651 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.629308 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.629360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.629372 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.629392 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.629419 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.641521 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.657083 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.697200 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.714843 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.733095 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.733142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.733154 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.733176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.733189 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.742316 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.755908 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.767808 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.779315 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.836279 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.836337 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.836348 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.836370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.836381 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.941060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.941108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.941120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.941142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.941155 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.963053 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp"] Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.963689 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.965777 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.965841 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.969146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.969191 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.969201 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.969250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.969267 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.981768 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: E0123 13:33:09.982337 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.986668 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.986720 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.986732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.986750 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.986764 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:09Z","lastTransitionTime":"2026-01-23T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:09 crc kubenswrapper[4771]: I0123 13:33:09.998734 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc 
kubenswrapper[4771]: E0123 13:33:10.005698 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.009144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.009185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 
crc kubenswrapper[4771]: I0123 13:33:10.009194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.009209 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.009220 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.012919 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.012965 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.012993 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mvj\" (UniqueName: \"kubernetes.io/projected/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-kube-api-access-92mvj\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.013036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.014309 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: E0123 13:33:10.020203 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.023633 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.023688 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.023699 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.023719 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.023731 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.028296 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: E0123 13:33:10.035456 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the preceding node-status patch, elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.038734 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.038768 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.038777 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.038795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.038806 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.041604 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: E0123 13:33:10.050456 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image list identical to the preceding node-status patches, elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z"
Jan 23 13:33:10 crc kubenswrapper[4771]: E0123 13:33:10.050576 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.052114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.052142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.052152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.052166 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.052177 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.055395 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.066298 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.083226 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079
b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.097066 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.109869 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.114283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.114320 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.114341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92mvj\" (UniqueName: \"kubernetes.io/projected/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-kube-api-access-92mvj\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.114369 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.115193 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.115596 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.121222 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.121641 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.129944 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92mvj\" (UniqueName: \"kubernetes.io/projected/81ddaf2d-5008-4aeb-86aa-af7df8d3fb01-kube-api-access-92mvj\") pod \"ovnkube-control-plane-749d76644c-lsjsp\" (UID: \"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.137689 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.151773 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.154832 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.154872 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.154884 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.154922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.154938 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.165574 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.178555 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:10Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.197779 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:24:44.254499555 +0000 UTC Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.227530 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:10 crc kubenswrapper[4771]: E0123 13:33:10.227675 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.257441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.257489 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.257499 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.257516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.257527 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.275895 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" Jan 23 13:33:10 crc kubenswrapper[4771]: W0123 13:33:10.288701 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ddaf2d_5008_4aeb_86aa_af7df8d3fb01.slice/crio-9cf54899139c0e5149de3a23c5bcd3e8115b90b08102e2a76a1a312ebe0c11a2 WatchSource:0}: Error finding container 9cf54899139c0e5149de3a23c5bcd3e8115b90b08102e2a76a1a312ebe0c11a2: Status 404 returned error can't find the container with id 9cf54899139c0e5149de3a23c5bcd3e8115b90b08102e2a76a1a312ebe0c11a2 Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.359782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.359814 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.359823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.359837 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.359848 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.463104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.463142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.463151 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.463165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.463176 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.543085 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" event={"ID":"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01","Type":"ContainerStarted","Data":"9cf54899139c0e5149de3a23c5bcd3e8115b90b08102e2a76a1a312ebe0c11a2"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.566618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.566685 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.566702 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.566725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.566743 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.669498 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.669570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.669588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.669612 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.669632 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.773164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.773233 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.773245 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.773265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.773278 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.876276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.876327 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.876341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.876361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.876376 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.979687 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.979874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.979973 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.980064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:10 crc kubenswrapper[4771]: I0123 13:33:10.980188 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:10Z","lastTransitionTime":"2026-01-23T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.047390 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4vhqn"] Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.048360 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 13:33:11.048486 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.065444 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.083523 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.083834 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.083919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.084002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.084101 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.090870 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.109355 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.123876 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhjd\" (UniqueName: \"kubernetes.io/projected/6b016d90-c27f-4401-99f4-859f3627e491-kube-api-access-wdhjd\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.123924 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.124782 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.137312 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.151549 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.166485 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.179886 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f205
3d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.186790 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.186821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.186832 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.186847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.186861 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.191049 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.198224 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:56:00.971760143 +0000 UTC Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.201615 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.211094 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.221153 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.225195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhjd\" (UniqueName: \"kubernetes.io/projected/6b016d90-c27f-4401-99f4-859f3627e491-kube-api-access-wdhjd\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.225244 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 13:33:11.225341 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 
13:33:11.225379 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:11.725366927 +0000 UTC m=+32.747904552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.227259 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.227270 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 13:33:11.227374 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 13:33:11.227493 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.230381 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.240724 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhjd\" (UniqueName: \"kubernetes.io/projected/6b016d90-c27f-4401-99f4-859f3627e491-kube-api-access-wdhjd\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.243549 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.256537 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.269356 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.289458 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.289498 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.289506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.289520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.289530 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.392581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.392621 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.392631 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.392649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.392660 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.495479 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.495513 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.495522 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.495534 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.495544 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.549034 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" event={"ID":"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01","Type":"ContainerStarted","Data":"37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.549101 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" event={"ID":"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01","Type":"ContainerStarted","Data":"982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.553218 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/1.log" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.553925 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/0.log" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.558292 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a" exitCode=2 Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.558338 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.558380 4771 scope.go:117] "RemoveContainer" containerID="757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.559624 4771 scope.go:117] "RemoveContainer" containerID="a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a" Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 13:33:11.559970 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.564451 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.587256 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079
b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.598057 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.598100 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.598111 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.598129 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.598142 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.604398 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.616389 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.630515 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.646998 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.662058 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.676043 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f205
3d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.687189 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c91
5fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.698500 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b7
36257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.700304 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.700344 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.700353 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.700367 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.700380 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.710708 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.721612 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.730756 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.730953 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 13:33:11.731081 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:11 crc kubenswrapper[4771]: E0123 13:33:11.731140 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:12.731125996 +0000 UTC m=+33.753663621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.742481 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"
,\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.753562 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.764156 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.775038 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.783166 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.793898 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.802527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.802575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.802586 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.802601 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.802617 4771 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.805535 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.817578 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.831446 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227
d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.844552 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95
ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.863149 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079
b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\", \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x11}\\\\nI0123 13:33:10.289926 6193 udn_isolation.go:361] D-Bus event received: \\\\u0026dbus.Signal{Sender:\\\\\\\"org.freedesktop.systemd1\\\\\\\", Path:\\\\\\\"/org/freedesktop/systemd1/unit/systemd_2djournald_2esocket\\\\\\\", Name:\\\\\\\"org.freedesktop.DBus.Properties.PropertiesChanged\\\\\\\", Body:[]interface 
{}{\\\\\\\"org.freedesktop.systemd1.Socket\\\\\\\", map[string]dbus.Variant{\\\\\\\"ControlPID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0x0}, \\\\\\\"GID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}, \\\\\\\"Result\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"success\\\\\\\"}, \\\\\\\"UID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}}, []string{\\\\\\\"ExecStartPre\\\\\\\", \\\\\\\"ExecStartPost\\\\\\\", \\\\\\\"ExecStopPre\\\\\\\", \\\\\\\"ExecStopPost\\\\\\\"}}, Sequence:0x12}\\\\nI0123 13:33:10.289963 6193 ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run
-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.876391 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.886590 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.897818 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.904978 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.905017 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.905026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.905043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.905055 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:11Z","lastTransitionTime":"2026-01-23T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.909744 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.928936 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.947920 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.961809 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:11 crc kubenswrapper[4771]: I0123 13:33:11.977122 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:11Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.007582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.007637 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.007649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.007670 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.007684 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.110380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.110453 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.110492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.110520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.110532 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.198534 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:59:26.817357163 +0000 UTC Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.214019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.214054 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.214064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.214078 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.214088 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.227741 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:12 crc kubenswrapper[4771]: E0123 13:33:12.227894 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.317651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.317716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.317749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.317781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.317807 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.421056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.421135 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.421159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.421187 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.421206 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.523771 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.523838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.523861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.523891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.523913 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.565727 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/1.log" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.627756 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.627810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.627819 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.627838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.627851 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.729492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.729544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.729560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.729582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.729599 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.741051 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:12 crc kubenswrapper[4771]: E0123 13:33:12.741299 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:12 crc kubenswrapper[4771]: E0123 13:33:12.741401 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:14.741368409 +0000 UTC m=+35.763906084 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.832964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.833047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.833073 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.833104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.833128 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.936104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.936169 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.936186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.936210 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:12 crc kubenswrapper[4771]: I0123 13:33:12.936229 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:12Z","lastTransitionTime":"2026-01-23T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.039710 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.039775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.039793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.039816 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.039835 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.045335 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.045488 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.045571 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.045693 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.045759 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:29.045738941 +0000 UTC m=+50.068276576 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.045805 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.045925 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:29.045899186 +0000 UTC m=+50.068437031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.046187 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:33:29.046170455 +0000 UTC m=+50.068708260 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.142900 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.143019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.143044 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.143073 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.143096 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.146934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147190 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147237 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.147244 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147262 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147455 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147491 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147508 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147550 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:29.147482958 +0000 UTC m=+50.170020623 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.147586 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:29.147571961 +0000 UTC m=+50.170109616 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.198911 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:54:37.124350685 +0000 UTC Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.227704 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.227890 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.227984 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.228176 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.228451 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:13 crc kubenswrapper[4771]: E0123 13:33:13.228639 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.245619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.245647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.245659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.245671 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.245682 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.349097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.349183 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.349209 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.349244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.349272 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.451894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.451964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.451987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.452015 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.452036 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.554393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.554461 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.554470 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.554483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.554493 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.657530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.657602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.657624 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.657648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.657666 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.760529 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.760574 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.760584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.760605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.760618 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.862970 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.863034 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.863055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.863085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.863107 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.965316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.965456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.965483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.965557 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:13 crc kubenswrapper[4771]: I0123 13:33:13.965595 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:13Z","lastTransitionTime":"2026-01-23T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.069350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.069451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.069465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.069492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.069506 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.172221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.172274 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.172284 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.172302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.172313 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.199875 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:53:51.51861235 +0000 UTC Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.227096 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:14 crc kubenswrapper[4771]: E0123 13:33:14.227229 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.275508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.275577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.275595 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.275627 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.275663 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.378239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.378284 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.378297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.378315 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.378327 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.480986 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.481052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.481070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.481103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.481121 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.582859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.582906 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.582918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.582936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.582949 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.687211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.687275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.687298 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.687326 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.687349 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.687499 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.704952 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.723854 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.740456 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.757875 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.767027 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:14 crc kubenswrapper[4771]: E0123 13:33:14.767207 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:14 crc kubenswrapper[4771]: E0123 13:33:14.767286 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:18.767264502 +0000 UTC m=+39.789802127 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.777369 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.789432 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.789467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.789480 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.789496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.789509 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.792319 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.803486 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.823267 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd5
3e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\", \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x11}\\\\nI0123 13:33:10.289926 6193 udn_isolation.go:361] D-Bus event received: \\\\u0026dbus.Signal{Sender:\\\\\\\"org.freedesktop.systemd1\\\\\\\", Path:\\\\\\\"/org/freedesktop/systemd1/unit/systemd_2djournald_2esocket\\\\\\\", Name:\\\\\\\"org.freedesktop.DBus.Properties.PropertiesChanged\\\\\\\", Body:[]interface {}{\\\\\\\"org.freedesktop.systemd1.Socket\\\\\\\", map[string]dbus.Variant{\\\\\\\"ControlPID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0x0}, \\\\\\\"GID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}, \\\\\\\"Result\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"success\\\\\\\"}, \\\\\\\"UID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}}, []string{\\\\\\\"ExecStartPre\\\\\\\", \\\\\\\"ExecStartPost\\\\\\\", \\\\\\\"ExecStopPre\\\\\\\", \\\\\\\"ExecStopPost\\\\\\\"}}, Sequence:0x12}\\\\nI0123 13:33:10.289963 6193 
ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.852149 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.874954 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.891947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.891986 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.891998 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.892013 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.892025 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.892760 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.905000 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.916296 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.926344 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.936786 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.945438 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:14Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.994590 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.994647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.994657 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.994672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:14 crc kubenswrapper[4771]: I0123 13:33:14.994681 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:14Z","lastTransitionTime":"2026-01-23T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.097908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.097949 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.097958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.097973 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.097982 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.200044 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:37:40.046025373 +0000 UTC Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.200400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.200454 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.200463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.200480 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.200490 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.227728 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.227799 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:15 crc kubenswrapper[4771]: E0123 13:33:15.227907 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.227957 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:15 crc kubenswrapper[4771]: E0123 13:33:15.228065 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:15 crc kubenswrapper[4771]: E0123 13:33:15.228252 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.303060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.303100 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.303112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.303125 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.303135 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.406237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.406283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.406294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.406311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.406323 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.509772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.509879 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.509898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.509923 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.509941 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.612397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.612477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.612489 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.612510 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.612526 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.714906 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.714942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.714951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.714983 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.714995 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.817992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.818039 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.818049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.818064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.818079 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.921092 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.921136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.921148 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.921165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:15 crc kubenswrapper[4771]: I0123 13:33:15.921179 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:15Z","lastTransitionTime":"2026-01-23T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.024056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.024120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.024136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.024157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.024174 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.126263 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.126297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.126307 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.126320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.126329 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.200202 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:42:00.616134396 +0000 UTC Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.227276 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:16 crc kubenswrapper[4771]: E0123 13:33:16.227516 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.228341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.228381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.228435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.228449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.228460 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.331645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.331711 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.331728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.331755 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.331774 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.434280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.434345 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.434360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.434386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.434438 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.537056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.537120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.537138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.537164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.537185 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.640522 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.640579 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.640592 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.640611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.640624 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.744073 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.744166 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.744190 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.744225 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.744249 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.847283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.847368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.847393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.847471 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.847497 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.950893 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.950939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.950952 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.950968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:16 crc kubenswrapper[4771]: I0123 13:33:16.950981 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:16Z","lastTransitionTime":"2026-01-23T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.053922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.053965 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.053980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.053996 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.054007 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.156967 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.157050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.157073 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.157153 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.157219 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.200883 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:14:26.520057527 +0000 UTC Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.227205 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.227256 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:17 crc kubenswrapper[4771]: E0123 13:33:17.227394 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.227504 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:17 crc kubenswrapper[4771]: E0123 13:33:17.227716 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:17 crc kubenswrapper[4771]: E0123 13:33:17.227886 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.260263 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.260328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.260343 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.260361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.260375 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.363955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.364013 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.364032 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.364064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.364081 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.467005 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.467078 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.467108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.467141 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.467160 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.569770 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.569827 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.569845 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.569869 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.569888 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.673481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.673527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.673543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.673565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.673581 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.776964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.777022 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.777044 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.777067 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.777084 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.879563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.879610 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.879620 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.879637 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.879647 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.982860 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.982922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.982935 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.982950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:17 crc kubenswrapper[4771]: I0123 13:33:17.982961 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:17Z","lastTransitionTime":"2026-01-23T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.085941 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.085995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.086008 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.086029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.086042 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.189109 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.189157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.189170 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.189194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.189206 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.201735 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:31:59.104973718 +0000 UTC Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.227285 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:18 crc kubenswrapper[4771]: E0123 13:33:18.227467 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.291908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.291962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.291973 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.291988 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.292002 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.395813 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.395894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.395918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.395947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.395969 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.499587 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.499674 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.499697 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.499728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.499754 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.603012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.603075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.603119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.603143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.603165 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.705516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.705583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.705605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.705634 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.705656 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.808632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.808693 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.808703 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.808722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.808736 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.813275 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:18 crc kubenswrapper[4771]: E0123 13:33:18.813466 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:18 crc kubenswrapper[4771]: E0123 13:33:18.813554 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:26.813529229 +0000 UTC m=+47.836066884 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.911600 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.911649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.911660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.911677 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:18 crc kubenswrapper[4771]: I0123 13:33:18.911690 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:18Z","lastTransitionTime":"2026-01-23T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.014518 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.014562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.014573 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.014589 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.014600 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.117520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.117584 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.117594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.117610 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.117621 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.202797 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:38:08.379726877 +0000 UTC Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.220576 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.220622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.220639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.220663 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.220680 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.227758 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.227827 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:19 crc kubenswrapper[4771]: E0123 13:33:19.228076 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.228388 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:19 crc kubenswrapper[4771]: E0123 13:33:19.228627 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:19 crc kubenswrapper[4771]: E0123 13:33:19.228890 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.246576 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 
2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.260568 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.282498 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.303164 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.322982 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.323055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.323074 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.323098 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.323117 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.328295 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.350469 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.366574 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.382295 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.401716 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.417921 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.424964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.425002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.425012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.425027 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.425038 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.436506 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.451620 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.472525 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd5
3e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://757a68a17c1075545cba34fa12a327fcc4a6cde064b8579287ad5504740d402b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:07Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 13:33:07.529398 6023 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 13:33:07.529446 6023 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 13:33:07.529496 6023 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 13:33:07.529510 6023 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 13:33:07.529516 6023 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 13:33:07.529529 6023 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 13:33:07.529536 6023 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 13:33:07.529539 6023 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 13:33:07.529547 6023 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 13:33:07.529558 6023 factory.go:656] Stopping watch factory\\\\nI0123 13:33:07.529556 6023 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 13:33:07.529555 6023 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 13:33:07.529565 6023 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:07.529584 6023 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 13:33:07.529596 6023 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\", \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x11}\\\\nI0123 13:33:10.289926 6193 udn_isolation.go:361] D-Bus event received: \\\\u0026dbus.Signal{Sender:\\\\\\\"org.freedesktop.systemd1\\\\\\\", Path:\\\\\\\"/org/freedesktop/systemd1/unit/systemd_2djournald_2esocket\\\\\\\", Name:\\\\\\\"org.freedesktop.DBus.Properties.PropertiesChanged\\\\\\\", Body:[]interface {}{\\\\\\\"org.freedesktop.systemd1.Socket\\\\\\\", map[string]dbus.Variant{\\\\\\\"ControlPID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0x0}, \\\\\\\"GID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}, \\\\\\\"Result\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"success\\\\\\\"}, \\\\\\\"UID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}}, []string{\\\\\\\"ExecStartPre\\\\\\\", \\\\\\\"ExecStartPost\\\\\\\", \\\\\\\"ExecStopPre\\\\\\\", \\\\\\\"ExecStopPost\\\\\\\"}}, Sequence:0x12}\\\\nI0123 13:33:10.289963 6193 
ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.486914 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.497599 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.509066 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:19Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.527329 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.527377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.527390 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.527432 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.527444 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.630213 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.630257 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.630266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.630281 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.630293 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.732901 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.732933 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.732941 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.732954 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.732963 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.836668 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.836725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.836748 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.836779 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.836803 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.939190 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.939259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.939278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.939306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:19 crc kubenswrapper[4771]: I0123 13:33:19.939386 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:19Z","lastTransitionTime":"2026-01-23T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.045797 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.045852 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.045884 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.045913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.045931 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.150305 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.150352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.150363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.150379 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.150392 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.203055 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:26:53.290865862 +0000 UTC Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.227231 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:20 crc kubenswrapper[4771]: E0123 13:33:20.227372 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.253176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.253223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.253238 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.253259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.253274 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.356114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.356158 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.356170 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.356184 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.356196 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.385866 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.385924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.385939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.385956 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.385969 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: E0123 13:33:20.407401 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:20Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.412435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.412476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.412485 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.412502 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.412511 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: E0123 13:33:20.429741 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:20Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.434039 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.434076 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.434085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.434099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.434110 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: E0123 13:33:20.452967 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:20Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.457088 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.457151 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.457174 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.457205 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.457224 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: E0123 13:33:20.469615 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:20Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.473885 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.473943 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.473961 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.473988 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.474007 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: E0123 13:33:20.490641 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:20Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:20 crc kubenswrapper[4771]: E0123 13:33:20.490835 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.493722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.493780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.493795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.493813 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.493826 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.596392 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.596466 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.596479 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.596500 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.596513 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.698538 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.698589 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.698601 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.698617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.698630 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.801414 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.801467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.801475 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.801488 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.801501 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.904635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.904687 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.904699 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.904721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:20 crc kubenswrapper[4771]: I0123 13:33:20.904735 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:20Z","lastTransitionTime":"2026-01-23T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.007638 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.007784 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.007807 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.007830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.007847 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.111156 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.111255 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.111285 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.111321 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.111346 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.203648 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:56:41.414020019 +0000 UTC Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.214359 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.214397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.214405 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.214424 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.214433 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.227775 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.227867 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.227936 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:21 crc kubenswrapper[4771]: E0123 13:33:21.227889 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:21 crc kubenswrapper[4771]: E0123 13:33:21.228064 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:21 crc kubenswrapper[4771]: E0123 13:33:21.228155 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.317894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.317960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.317977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.318002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.318021 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.420692 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.420746 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.420764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.420796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.420814 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.524381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.524474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.524493 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.524513 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.524527 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.627753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.627810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.627826 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.627847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.627862 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.731063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.731111 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.731119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.731135 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.731144 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.834318 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.834434 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.834499 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.834527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.834548 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.937853 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.937929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.937949 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.937976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:21 crc kubenswrapper[4771]: I0123 13:33:21.937997 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:21Z","lastTransitionTime":"2026-01-23T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.041255 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.041333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.041344 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.041362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.041374 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.143613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.143662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.143677 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.143699 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.143714 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.204026 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:19:07.179063746 +0000 UTC Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.228188 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:22 crc kubenswrapper[4771]: E0123 13:33:22.228421 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.245804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.245835 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.245844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.245859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.245871 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.348455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.348498 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.348512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.348531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.348546 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.450649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.450703 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.450716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.450738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.450752 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.553389 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.553444 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.553453 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.553469 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.553480 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.656139 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.656201 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.656223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.656253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.656274 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.759367 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.759415 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.759431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.759463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.759475 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.861511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.861585 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.861605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.861629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.861646 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.964847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.964907 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.964924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.964954 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:22 crc kubenswrapper[4771]: I0123 13:33:22.964971 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:22Z","lastTransitionTime":"2026-01-23T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.068199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.068288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.068306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.068332 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.068346 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.171588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.171661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.171683 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.171736 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.171751 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.204636 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:58:29.021612894 +0000 UTC Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.228279 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.228343 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.228662 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:23 crc kubenswrapper[4771]: E0123 13:33:23.228660 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:23 crc kubenswrapper[4771]: E0123 13:33:23.228751 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:23 crc kubenswrapper[4771]: E0123 13:33:23.229144 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.229647 4771 scope.go:117] "RemoveContainer" containerID="a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.255308 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.270365 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.274474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.274511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.274521 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.274536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.274547 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.292802 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.309387 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.322001 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.337820 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.354877 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.373245 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.379191 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.379236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.379249 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.379268 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.379287 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.392809 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.404923 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.425386 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd5
3e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\", \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x11}\\\\nI0123 13:33:10.289926 6193 udn_isolation.go:361] D-Bus event received: \\\\u0026dbus.Signal{Sender:\\\\\\\"org.freedesktop.systemd1\\\\\\\", Path:\\\\\\\"/org/freedesktop/systemd1/unit/systemd_2djournald_2esocket\\\\\\\", Name:\\\\\\\"org.freedesktop.DBus.Properties.PropertiesChanged\\\\\\\", Body:[]interface {}{\\\\\\\"org.freedesktop.systemd1.Socket\\\\\\\", map[string]dbus.Variant{\\\\\\\"ControlPID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0x0}, \\\\\\\"GID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}, \\\\\\\"Result\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"success\\\\\\\"}, \\\\\\\"UID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}}, []string{\\\\\\\"ExecStartPre\\\\\\\", \\\\\\\"ExecStartPost\\\\\\\", 
\\\\\\\"ExecStopPre\\\\\\\", \\\\\\\"ExecStopPost\\\\\\\"}}, Sequence:0x12}\\\\nI0123 13:33:10.289963 6193 ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.437042 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.448666 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.458995 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.471875 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.482287 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.482319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.482389 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.482404 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.482441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.482456 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.585215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.585261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.585274 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.585293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.585310 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.615040 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/1.log" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.617892 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.618095 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.631054 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.642932 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.654805 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.665531 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.678557 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.687683 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.687725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.687735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.687753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.687766 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.699589 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.717881 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d
98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\", \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x11}\\\\nI0123 13:33:10.289926 6193 udn_isolation.go:361] D-Bus event received: \\\\u0026dbus.Signal{Sender:\\\\\\\"org.freedesktop.systemd1\\\\\\\", Path:\\\\\\\"/org/freedesktop/systemd1/unit/systemd_2djournald_2esocket\\\\\\\", Name:\\\\\\\"org.freedesktop.DBus.Properties.PropertiesChanged\\\\\\\", Body:[]interface {}{\\\\\\\"org.freedesktop.systemd1.Socket\\\\\\\", map[string]dbus.Variant{\\\\\\\"ControlPID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0x0}, \\\\\\\"GID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}, \\\\\\\"Result\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"success\\\\\\\"}, \\\\\\\"UID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}}, []string{\\\\\\\"ExecStartPre\\\\\\\", \\\\\\\"ExecStartPost\\\\\\\", \\\\\\\"ExecStopPre\\\\\\\", \\\\\\\"ExecStopPost\\\\\\\"}}, Sequence:0x12}\\\\nI0123 13:33:10.289963 6193 
ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.727582 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.737452 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.748564 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.758655 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.769567 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.779741 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.790108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.790152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.790161 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.790175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.790185 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.792043 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.804055 4771 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.817428 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:23Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.893191 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.893248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.893264 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.893511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.893535 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.996294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.996332 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.996342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.996355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:23 crc kubenswrapper[4771]: I0123 13:33:23.996364 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:23Z","lastTransitionTime":"2026-01-23T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.099526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.099587 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.099605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.099628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.099646 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.201817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.201864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.201877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.201894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.201906 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.205027 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:05:48.113013327 +0000 UTC Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.227729 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:24 crc kubenswrapper[4771]: E0123 13:33:24.227932 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.304899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.304938 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.304950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.304965 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.304977 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.408074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.408137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.408149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.408169 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.408183 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.511054 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.511110 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.511122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.511137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.511154 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.614013 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.614050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.614060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.614072 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.614082 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.622895 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/2.log" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.623551 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/1.log" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.626310 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb" exitCode=1 Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.626363 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.626426 4771 scope.go:117] "RemoveContainer" containerID="a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.627227 4771 scope.go:117] "RemoveContainer" containerID="3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb" Jan 23 13:33:24 crc kubenswrapper[4771]: E0123 13:33:24.627424 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.642821 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.656455 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.672028 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.685843 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.697917 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.711464 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.716618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.716673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.716690 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.716723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.716756 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.727798 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.748647 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d
98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a526ea0c91ec0a64848819d14ef17228b8f1e079b4f45513e0587e0c706a5e2a\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2026-01-23T13:33:10Z\\\",\\\"message\\\":\\\", \\\\\\\"InactiveExitTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"InactiveExitTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"InvocationID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"ay\\\\\\\"}, value:[]uint8{}}, \\\\\\\"Job\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"(uo)\\\\\\\"}, value:[]interface {}{0x0, \\\\\\\"/\\\\\\\"}}, \\\\\\\"StateChangeTimestamp\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x6490e2e96c78a}, \\\\\\\"StateChangeTimestampMonotonic\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"t\\\\\\\"}, value:0x16f886}, \\\\\\\"SubState\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"active\\\\\\\"}}, []string{\\\\\\\"Conditions\\\\\\\", \\\\\\\"Asserts\\\\\\\"}}, Sequence:0x11}\\\\nI0123 13:33:10.289926 6193 udn_isolation.go:361] D-Bus event received: \\\\u0026dbus.Signal{Sender:\\\\\\\"org.freedesktop.systemd1\\\\\\\", Path:\\\\\\\"/org/freedesktop/systemd1/unit/systemd_2djournald_2esocket\\\\\\\", Name:\\\\\\\"org.freedesktop.DBus.Properties.PropertiesChanged\\\\\\\", Body:[]interface {}{\\\\\\\"org.freedesktop.systemd1.Socket\\\\\\\", map[string]dbus.Variant{\\\\\\\"ControlPID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0x0}, \\\\\\\"GID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}, \\\\\\\"Result\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"s\\\\\\\"}, value:\\\\\\\"success\\\\\\\"}, \\\\\\\"UID\\\\\\\":dbus.Variant{sig:dbus.Signature{str:\\\\\\\"u\\\\\\\"}, value:0xffffffff}}, []string{\\\\\\\"ExecStartPre\\\\\\\", \\\\\\\"ExecStartPost\\\\\\\", \\\\\\\"ExecStopPre\\\\\\\", \\\\\\\"ExecStopPost\\\\\\\"}}, Sequence:0x12}\\\\nI0123 13:33:10.289963 6193 ud\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPa
th\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.760220 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.772838 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.786980 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.797564 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.811343 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.818678 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.818733 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.818749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.818773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.818791 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.827167 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.840497 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.853303 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:24Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.921890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.921929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.921937 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.921950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:24 crc kubenswrapper[4771]: I0123 13:33:24.921960 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:24Z","lastTransitionTime":"2026-01-23T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.024955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.025030 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.025052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.025083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.025104 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.128091 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.128189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.128211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.128238 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.128262 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.205568 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:21:47.59133921 +0000 UTC Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.228076 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.228126 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.228182 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:25 crc kubenswrapper[4771]: E0123 13:33:25.228312 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:25 crc kubenswrapper[4771]: E0123 13:33:25.228516 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:25 crc kubenswrapper[4771]: E0123 13:33:25.228798 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.231111 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.231176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.231204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.231232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.231255 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.304453 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.334676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.334747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.334772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.334809 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.334830 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.438063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.438117 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.438133 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.438188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.438203 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.541325 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.541376 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.541391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.541413 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.541459 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.632654 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/2.log" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.636917 4771 scope.go:117] "RemoveContainer" containerID="3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb" Jan 23 13:33:25 crc kubenswrapper[4771]: E0123 13:33:25.637170 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.643850 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.643926 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.643955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.643987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.644010 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.657305 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.671304 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.693360 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.716702 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.731801 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.747022 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.747095 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.747113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.747143 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.747162 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.749825 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.765553 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.783780 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.803754 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.820552 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.843882 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f205
3d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.850680 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.850759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.850783 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.850813 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.850833 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.861955 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.886698 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.901710 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.915978 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.931073 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:25Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.955122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.955176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.955192 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.955213 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:25 crc kubenswrapper[4771]: I0123 13:33:25.955230 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:25Z","lastTransitionTime":"2026-01-23T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.058501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.058562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.058581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.058608 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.058627 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.162509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.162593 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.162603 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.162617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.162630 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.206849 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:32:12.966828389 +0000 UTC Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.227726 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:26 crc kubenswrapper[4771]: E0123 13:33:26.227908 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.266157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.266220 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.266239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.266265 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.266287 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.369293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.369336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.369346 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.369361 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.369373 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.472628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.472693 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.472708 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.473254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.473307 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.576357 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.576420 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.576457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.576482 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.576499 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.679926 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.680043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.680068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.680098 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.680121 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.783483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.783542 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.783555 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.783578 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.783592 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.886735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.887142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.887380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.887645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.887927 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.899263 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:26 crc kubenswrapper[4771]: E0123 13:33:26.899526 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:26 crc kubenswrapper[4771]: E0123 13:33:26.899617 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:33:42.899596544 +0000 UTC m=+63.922134169 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.991381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.991483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.991509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.991530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:26 crc kubenswrapper[4771]: I0123 13:33:26.991546 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:26Z","lastTransitionTime":"2026-01-23T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.095037 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.095088 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.095105 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.095124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.095142 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.199069 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.199157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.199180 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.199208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.199236 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.207601 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:41:33.423359652 +0000 UTC Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.228103 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.228258 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:27 crc kubenswrapper[4771]: E0123 13:33:27.228524 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.228581 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:27 crc kubenswrapper[4771]: E0123 13:33:27.228850 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:27 crc kubenswrapper[4771]: E0123 13:33:27.228970 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.302042 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.302094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.302112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.302149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.302167 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.405524 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.405580 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.405593 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.405611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.405623 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.508206 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.508290 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.508317 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.508349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.508369 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.611834 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.611907 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.611919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.611939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.611953 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.714103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.714177 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.714199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.714226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.714249 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.817502 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.817532 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.817544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.817558 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.817569 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.920747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.920806 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.920820 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.920839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:27 crc kubenswrapper[4771]: I0123 13:33:27.921234 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.025071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.025119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.025138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.025164 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.025184 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
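The block above is one status-sync cycle: the kubelet re-records the three resource events plus NodeNotReady roughly every 100 ms, and setters.go:603 rewrites the Ready=False condition each time because no CNI config has appeared in /etc/kubernetes/cni/net.d/ yet. A minimal sketch (not part of the journal; assumes any recent Go toolchain) that decodes the condition payload these entries log, e.g. to alert on reason=KubeletNotReady when scraping this journal:

```go
// Decode the condition JSON that setters.go:603 logs. The struct mirrors
// only the keys visible in the entries above (the real v1.NodeCondition in
// k8s.io/api carries the same fields with richer timestamp types).
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from one "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:27Z","lastTransitionTime":"2026-01-23T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\nmessage=%s\n", c.Type, c.Status, c.Reason, c.Message)
}
```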
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.128163 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.128538 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.128611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.128726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.128785 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.207757 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:49:10.759113863 +0000 UTC
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.227811 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:33:28 crc kubenswrapper[4771]: E0123 13:33:28.228014 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.232621 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.232671 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.232691 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.232713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.232730 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
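Two entries in the block above break the NotReady rhythm: certificate_manager.go:356 reports that the kubelet-serving certificate is valid until 2026-02-24 but that its rotation deadline of 2025-11-15 already lies well in the past at the node's current clock of 2026-01-23, and pod_workers.go:1301 shows a pod sync actually failing on the missing CNI config rather than just being recorded. A small sketch (not part of the journal) that parses the two logged timestamps, which are in Go's default time.Time string format, and shows how overdue the rotation is:

```go
// Parse the timestamps printed by certificate_manager.go:356 and compute
// how far past the rotation deadline the node's clock already is.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // time.Time.String() format

	expiry, err := time.Parse(layout, "2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	deadline, err := time.Parse(layout, "2025-11-15 23:49:10.759113863 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Clock time taken from the surrounding journal entries.
	now, err := time.Parse(layout, "2026-01-23 13:33:28 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println("rotation overdue by:", now.Sub(deadline))      // roughly 68 days
	fmt.Println("serving cert still valid for:", expiry.Sub(now))
}
```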
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.335568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.335641 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.335661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.335685 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.335704 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.438387 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.438443 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.438453 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.438471 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.438482 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
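From here on, every status_manager.go:875 patch fails the same way: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate that expired on 2025-08-24, about five months before the node's current time of 2026-01-23, so crypto/x509 verification rejects the connection before any patch can be delivered. A throwaway diagnostic sketch (hypothetical, not part of the log; assumes the webhook endpoint is reachable from where it runs) to print the validity window the webhook is actually serving:

```go
// Fetch and print the webhook certificate's validity window.
// InsecureSkipVerify is what makes an expired chain retrievable at all;
// it is only acceptable for inspection, never for trusting the traffic.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificates presented")
	}
	c := certs[0]
	fmt.Printf("subject:   %s\nnotBefore: %s\nnotAfter:  %s\n", c.Subject, c.NotBefore, c.NotAfter)
}
```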
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.450413 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.462301 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.469118 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z"
Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.483264 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.497849 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.512928 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.524718 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.541120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.541174 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.541185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.541200 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.541211 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.542918 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d
98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.555212 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.566797 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.580898 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.591127 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.607108 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.618161 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.630525 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.642691 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.644912 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.644970 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.644981 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.644996 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.645006 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.659145 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.671929 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:28Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.748177 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.748241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.748266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.748292 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.748307 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.851070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.851152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.851178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.851198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.851213 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.954291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.954337 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.954349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.954366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:28 crc kubenswrapper[4771]: I0123 13:33:28.954380 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:28Z","lastTransitionTime":"2026-01-23T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.059030 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.059355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.059484 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.059585 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.059691 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.124618 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.124753 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.124859 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:34:01.124827666 +0000 UTC m=+82.147365291 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.124881 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.124949 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:34:01.124923579 +0000 UTC m=+82.147461384 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.124994 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.125067 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.125104 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:34:01.125095885 +0000 UTC m=+82.147633710 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.162951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.162994 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.163004 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.163019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.163029 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.209257 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 11:19:25.865697565 +0000 UTC Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.225704 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.225767 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.225900 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.225917 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.225927 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.225976 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:34:01.225962822 +0000 UTC m=+82.248500447 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.226060 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.226112 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.226141 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.226228 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:34:01.22620201 +0000 UTC m=+82.248739675 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.227784 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.227840 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.227863 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.227933 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.228067 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:29 crc kubenswrapper[4771]: E0123 13:33:29.228237 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.246761 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\
\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c
69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-2
3T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.262699 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.265921 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.266020 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.266043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.266479 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.266717 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.289668 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d
98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.308518 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.325106 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.340883 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.356184 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.369913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.369998 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.370014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.370034 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.370048 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.370968 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.382859 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.398362 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.413954 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.426645 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.436884 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.450032 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.461648 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.472753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.472805 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.472820 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.472836 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.472846 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.473191 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.487702 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:29Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.575573 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.575616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.575626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.575641 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.575655 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.678002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.678049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.678060 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.678079 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.678091 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.782334 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.782386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.782401 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.782452 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.782468 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.884752 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.884794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.884804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.884819 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.884828 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.987176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.987215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.987223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.987243 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:29 crc kubenswrapper[4771]: I0123 13:33:29.987254 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:29Z","lastTransitionTime":"2026-01-23T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.089812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.089882 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.089893 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.089909 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.089921 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.192740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.192785 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.192796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.192812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.192823 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.210322 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:38:45.785684776 +0000 UTC Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.227945 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:30 crc kubenswrapper[4771]: E0123 13:33:30.228144 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.296146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.296196 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.296208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.296227 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.296242 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.398795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.398844 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.398859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.398874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.398884 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.501841 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.501927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.501936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.501959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.501972 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.604742 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.604798 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.604812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.604831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.604843 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.707865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.707911 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.707927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.707942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.707954 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.810669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.810735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.810753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.810777 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.810793 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.838632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.838679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.838692 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.838708 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.838718 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: E0123 13:33:30.851876 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:30Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.854953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.855017 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.855032 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.855061 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.855085 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: E0123 13:33:30.870888 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:30Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.875063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.875101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.875112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.875132 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.875146 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: E0123 13:33:30.888870 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:30Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.893169 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.893205 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.893217 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.893234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.893247 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: E0123 13:33:30.909347 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:30Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.913142 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.913173 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
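
The patch itself is well-formed; what kills it on every attempt is the admission chain. The serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z while the node's clock reads 2026-01-23, so the TLS handshake to https://127.0.0.1:9743 fails before the patch is ever evaluated. The failing check is the standard x509 validity window. Below is a minimal standalone sketch of that check in Go; the certificate path is hypothetical, and this is not the kubelet's or the webhook's own code:

```go
// checkcert.go: report whether a PEM-encoded certificate is currently valid.
// Minimal sketch; the path below is hypothetical, not one used by the kubelet.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// This is the same window check that makes the TLS handshake fail with
	// "certificate has expired or is not yet valid".
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

Nothing the kubelet retries can change this outcome; the status updates keep failing until the webhook's certificate is renewed, which for CRC normally means letting the cluster's own certificate rotation run or recreating the instance.
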
event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.913184 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.913198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.913209 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:30 crc kubenswrapper[4771]: E0123 13:33:30.925977 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:30Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:30 crc kubenswrapper[4771]: E0123 13:33:30.926106 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.928132 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
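
The "Unable to update node status" err="update node status exceeds retry count" entry marks the end of one bounded retry loop, not a terminal failure: the kubelet attempts the PATCH a fixed number of times per sync (the upstream kubelet uses a small constant, nodeStatusUpdateRetry, historically 5) and then waits for the next node-status interval, which is why the same sequence repeats throughout this log. A sketch of the pattern, with tryPatchStatus standing in for the real API call:

```go
// Bounded-retry sketch of the pattern behind "update node status exceeds
// retry count". tryPatchStatus is a hypothetical stand-in for the real PATCH
// against the API server, failing the way this log shows.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // the upstream kubelet uses a similar small constant

func tryPatchStatus(attempt int) error {
	// Every attempt fails identically here, because the admission webhook's
	// certificate is expired and the TLS handshake never succeeds.
	return errors.New("failed calling webhook \"node.network-node-identity.openshift.io\": certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```
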
event="NodeHasSufficientMemory" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.928189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.928207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.928237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:30 crc kubenswrapper[4771]: I0123 13:33:30.928259 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:30Z","lastTransitionTime":"2026-01-23T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.031097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.031146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.031157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.031174 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.031186 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.134093 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.134138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.134149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.134165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.134180 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.210856 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:23:59.135488469 +0000 UTC Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.227444 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.227499 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.227534 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:31 crc kubenswrapper[4771]: E0123 13:33:31.227661 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:31 crc kubenswrapper[4771]: E0123 13:33:31.227857 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:31 crc kubenswrapper[4771]: E0123 13:33:31.227961 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.237339 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.237394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.237404 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.237455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.237478 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
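
The certificate_manager.go:356 line concerns a different certificate from the one breaking the webhook: the kubelet's own kubernetes.io/kubelet-serving certificate is valid until 2026-02-24, and the printed rotation deadline (2025-12-08, already in the past here, so rotation is due) sits well before expiry because client-go's certificate manager schedules rotation at a jittered point inside the certificate's validity window, roughly 70-90% of the lifetime as I understand the upstream behavior. A sketch of that idea, not the exact upstream arithmetic; the NotBefore value below is assumed, since the log only shows the expiration:

```go
// Sketch of how a rotation deadline can land months before expiry: pick a
// jittered point in the certificate's validity window, in the spirit of
// client-go's certificate manager.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // somewhere in [70%, 90%) of the lifetime
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```
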
Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.340297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.340352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.340364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.340380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.340390 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.442922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.442968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.442980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.442995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.443008 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.547527 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.547599 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.547622 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.547654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.547675 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
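
The NetworkReady=false condition itself is mechanical: the container runtime reports the network plugin as not ready for as long as it finds no CNI network configuration in /etc/kubernetes/cni/net.d/, a directory the cluster's network provider populates once it starts. A sketch of that directory check (the real logic lives in the runtime's CNI plumbing, for example ocicni in CRI-O; this is only the gist):

```go
// Sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": scan the conf directory for network configs.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		// CNI config files conventionally end in .conf, .conflist, or .json.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("container runtime network not ready: no CNI configuration file; has your network provider started?")
		return
	}
	fmt.Println("CNI configuration present")
}
```
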
Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.651071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.651140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.651157 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.651185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.651204 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.753074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.753119 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.753131 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.753147 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.753158 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.855547 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.855605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.855621 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.855643 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.855659 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.958398 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.958519 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.958545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.958580 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:31 crc kubenswrapper[4771]: I0123 13:33:31.958620 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:31Z","lastTransitionTime":"2026-01-23T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.061930 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.061976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.061987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.062004 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.062016 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.164754 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.164813 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.164826 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.164850 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.164861 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
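
The pods named in the util.go:30 and pod_workers.go:1301 entries are parked rather than failed: the pod worker refuses to create a sandbox for a pod that needs the cluster network while NetworkReady=false, logs "Error syncing pod, skipping", and retries on a later sync; host-networked pods are exempt from the gate, which is how a network provider can come up at all. A sketch of that gate (names and types are illustrative, not the kubelet's actual ones):

```go
// Sketch of the gate behind "Error syncing pod, skipping": pods that need the
// cluster network are not started until the runtime reports NetworkReady=true.
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New("network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady")

func syncPod(p pod, networkReady bool) error {
	if !p.hostNetwork && !networkReady {
		// Give up this sync attempt; the pod worker retries once the
		// CNI configuration shows up and the runtime reports ready.
		return errNetworkNotReady
	}
	fmt.Println("starting sandbox for", p.name)
	return nil
}

func main() {
	pods := []pod{
		{name: "openshift-network-diagnostics/network-check-target-xd92c"},
		{name: "example/host-networked-agent", hostNetwork: true}, // hypothetical pod
	}
	for _, p := range pods {
		if err := syncPod(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
		}
	}
}
```
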
Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.211928 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:58:14.324312754 +0000 UTC Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.227545 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:32 crc kubenswrapper[4771]: E0123 13:33:32.227726 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.267039 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.267094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.267107 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.267123 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.267136 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.369865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.369907 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.369920 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.369942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.369955 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.472704 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.472739 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.472747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.472763 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.472800 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.575704 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.575749 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.575758 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.575780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.575794 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.679456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.679546 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.679564 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.679609 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.679625 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.783016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.783070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.783090 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.783108 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.783121 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.885794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.885838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.885849 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.885864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.885873 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.988475 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.988550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.988559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.988575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:32 crc kubenswrapper[4771]: I0123 13:33:32.988586 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:32Z","lastTransitionTime":"2026-01-23T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.091505 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.091577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.091601 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.091624 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.091636 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.194313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.194355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.194368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.194386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.194399 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.213008 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:53:27.802912741 +0000 UTC Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.227439 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.227501 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.227439 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:33 crc kubenswrapper[4771]: E0123 13:33:33.227592 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:33 crc kubenswrapper[4771]: E0123 13:33:33.227658 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:33 crc kubenswrapper[4771]: E0123 13:33:33.227836 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.298005 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.298076 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.298097 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.298125 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.298147 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.401223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.401306 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.401320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.401350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.401369 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.504443 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.504476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.504484 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.504498 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.504508 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.607350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.607445 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.607460 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.607480 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.607495 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.709357 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.709454 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.709470 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.709492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.709508 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.812246 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.812288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.812297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.812313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.812324 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.914660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.914698 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.914709 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.914721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:33 crc kubenswrapper[4771]: I0123 13:33:33.914731 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:33Z","lastTransitionTime":"2026-01-23T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.017168 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.017208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.017219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.017234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.017248 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.119507 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.119578 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.119591 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.119613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.119627 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.213467 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:49:54.352359533 +0000 UTC Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.222395 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.223004 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.223077 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.223132 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.223214 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.227190 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:34 crc kubenswrapper[4771]: E0123 13:33:34.227365 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.326140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.326230 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.326253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.326282 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.326304 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.428810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.428855 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.428867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.428882 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.428894 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.531659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.531735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.531747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.531772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.531786 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.634908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.634951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.634960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.634976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.634987 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.737797 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.737861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.737876 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.737895 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.737906 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.840528 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.840599 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.840610 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.840631 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.840648 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.942930 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.942964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.942974 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.942987 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:34 crc kubenswrapper[4771]: I0123 13:33:34.942996 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:34Z","lastTransitionTime":"2026-01-23T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.045199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.045260 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.045277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.045297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.045314 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.148386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.148491 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.148553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.148582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.148599 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.214033 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:07:46.002181961 +0000 UTC Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.227962 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.228083 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.227985 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:35 crc kubenswrapper[4771]: E0123 13:33:35.228176 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:35 crc kubenswrapper[4771]: E0123 13:33:35.228244 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:35 crc kubenswrapper[4771]: E0123 13:33:35.228362 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.251262 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.251321 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.251331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.251355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.251367 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.354292 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.354341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.354352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.354368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.354379 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.456938 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.456997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.457013 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.457035 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.457051 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.560242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.560314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.560327 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.560348 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.560363 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.663396 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.663464 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.663478 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.663493 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.663504 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.766049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.766105 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.766121 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.766144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.766162 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.868697 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.868810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.868831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.868859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.868879 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.972085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.972188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.972199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.972232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:35 crc kubenswrapper[4771]: I0123 13:33:35.972253 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:35Z","lastTransitionTime":"2026-01-23T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.075138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.075211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.075224 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.075247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.075262 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.177677 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.177716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.177726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.177741 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.177752 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.214627 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:11:20.863022573 +0000 UTC Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.227144 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:36 crc kubenswrapper[4771]: E0123 13:33:36.227278 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.280487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.280534 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.280546 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.280563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.280575 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.382623 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.382679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.382697 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.382721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.382738 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.485112 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.485170 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.485212 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.485234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.485252 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.587798 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.588091 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.588100 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.588114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.588125 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.690451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.690530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.690546 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.690564 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.690574 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.792780 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.792832 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.792840 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.792854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.792864 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.895947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.895983 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.895996 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.896016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.896027 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.998601 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.998724 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.998752 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.998781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:36 crc kubenswrapper[4771]: I0123 13:33:36.998804 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:36Z","lastTransitionTime":"2026-01-23T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.102099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.102145 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.102156 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.102176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.102189 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.205651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.205705 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.205715 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.205738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.205751 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.215266 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:49:35.907579045 +0000 UTC Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.227859 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.227920 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:37 crc kubenswrapper[4771]: E0123 13:33:37.228003 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
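The loop above has a single cause that the kubelet states in every entry: CRI-O reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so the kubelet keeps the node NotReady and refuses to create sandboxes for pods that need cluster networking (network-check-source, networking-console-plugin, network-metrics-daemon). The sketch below is a minimal Go illustration of that directory probe, not the kubelet's or CRI-O's actual code; the accepted extensions (.conf, .conflist, .json) are the conventional libcni set and are an assumption here.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cniConfigPresent reports whether dir contains at least one CNI network
// configuration file. The extension set mirrors the conventional libcni
// defaults; treat it as an assumption, not a quote of CRI-O's code.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	ok, err := cniConfigPresent(dir)
	if err != nil || !ok {
		// Matches the condition reported in every entry above:
		// NetworkReady=false, reason NetworkPluginNotReady.
		fmt.Printf("network not ready: no CNI configuration file in %s (err=%v)\n", dir, err)
		return
	}
	fmt.Println("CNI configuration found; network plugin can initialize")
}

On this cluster the missing file is normally written by the OVN-Kubernetes node pod (ovnkube-node-qbvcq in this log), so the condition can only clear once that pod runs; while it does not, these NodeNotReady heartbeats repeat roughly every 100 ms, as the timestamps show.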
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.227858 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:37 crc kubenswrapper[4771]: E0123 13:33:37.228155 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:37 crc kubenswrapper[4771]: E0123 13:33:37.228377 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.308629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.308699 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.308721 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.308748 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.308772 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.411434 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.411504 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.411523 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.411547 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.411567 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.514227 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.514271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.514304 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.514327 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.514342 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.616595 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.616663 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.616681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.616704 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.616720 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.720102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.720152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.720165 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.720182 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.720198 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.822692 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.822726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.822738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.822754 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.822767 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.925801 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.925848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.925856 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.925871 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:37 crc kubenswrapper[4771]: I0123 13:33:37.925882 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:37Z","lastTransitionTime":"2026-01-23T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.029769 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.029833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.029847 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.029869 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.029881 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.132494 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.132550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.132562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.132580 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.132591 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.215701 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:14:22.233720793 +0000 UTC Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.228075 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:38 crc kubenswrapper[4771]: E0123 13:33:38.228224 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
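Two other threads are interleaved with the NodeNotReady heartbeats here. First, the certificate_manager.go lines show the kubelet-serving certificate expiring 2026-02-24 while every computed rotation deadline (2026-01-12, 2025-11-22, 2026-01-16 across this stretch) already lies in the past, so the manager treats rotation as immediately due and re-jitters a fresh deadline on each pass. Second, further down, status patches fail because the network-node-identity webhook's serving certificate expired on 2025-08-24, long before the node's current time of 2026-01-23, so TLS verification rejects every call until that certificate is rotated. The Go sketch below illustrates only the jittered-deadline idea; the exact jitter window is an assumption rather than a quote of client-go, and the 30-day issuance time is hypothetical.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a renewal time at a jittered point late in the
// certificate's validity window, in the spirit of the certificate_manager.go
// lines above (assumed behaviour, not copied from client-go).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // assumed window: uniform in [0.7, 0.9)
	return notBefore.Add(time.Duration(fraction * float64(lifetime)))
}

func main() {
	// Expiration taken from the log; the issuance time is a hypothetical
	// 30-day-lifetime assumption for illustration only.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-30 * 24 * time.Hour)

	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		// A deadline in the past means rotation is due on every sync,
		// which is why each pass above logs a different deadline.
		fmt.Println("deadline has passed: rotate now")
	}
}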
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.234955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.235028 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.235041 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.235057 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.235068 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.338347 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.338382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.338390 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.338417 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.338427 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.440362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.440443 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.440455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.440473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.440487 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.543331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.543386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.543398 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.543443 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.543456 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.646385 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.646477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.646524 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.646545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.646558 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.751667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.751727 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.751742 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.751767 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.751785 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.854608 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.854667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.854676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.854696 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.854707 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.957583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.957635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.957649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.957669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:38 crc kubenswrapper[4771]: I0123 13:33:38.957682 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:38Z","lastTransitionTime":"2026-01-23T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.060470 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.060562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.060593 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.060662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.060680 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.163402 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.163559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.163578 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.163606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.163625 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.216005 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:13:03.778897947 +0000 UTC Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.227130 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.227167 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.227166 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:39 crc kubenswrapper[4771]: E0123 13:33:39.227254 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:39 crc kubenswrapper[4771]: E0123 13:33:39.227335 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:39 crc kubenswrapper[4771]: E0123 13:33:39.227406 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.228745 4771 scope.go:117] "RemoveContainer" containerID="3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb" Jan 23 13:33:39 crc kubenswrapper[4771]: E0123 13:33:39.229125 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.245367 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.260715 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.265492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.265530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.265542 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.265565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.265580 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.274466 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.289855 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.303604 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.318058 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.332515 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.344433 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.358282 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.368357 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.368394 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.368404 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:39 crc 
kubenswrapper[4771]: I0123 13:33:39.368451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.368462 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.371602 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.383142 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.395828 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227
d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.408084 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95
ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.424764 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d
98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.436031 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.446047 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.457606 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:39Z is after 2025-08-24T17:21:41Z" Jan 23 
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.471607 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.471649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.471659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.471675 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.471687 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.574810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.574854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.574866 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.574883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.574897 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.677378 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.677444 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.677456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.677474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.677488 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.779502 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.779562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.779573 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.779650 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.779661 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.883009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.883052 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.883061 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.883078 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.883089 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.986951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.987024 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.987048 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.987078 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:39 crc kubenswrapper[4771]: I0123 13:33:39.987103 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:39Z","lastTransitionTime":"2026-01-23T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
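From here the kubelet re-records the same five node events roughly every 100 ms, because the Ready condition stays false while no CNI configuration exists in /etc/kubernetes/cni/net.d/. When triaging a capture like this, it helps to collapse the repetition into per-second counts. A minimal sketch, assuming the journal text has been saved to a file (the path below is hypothetical); finditer is used per line so it also copes with flattened captures that hold several records per line:

# Collapse the repeated "Node became not ready" records into per-second counts.
import re
from collections import Counter

PATTERN = re.compile(
    r'(?P<ts>\d{2}:\d{2}:\d{2})\.\d+ \d+ setters\.go:\d+\] "Node became not ready"'
)

counts = Counter()
with open("kubelet-journal.log") as fh:  # hypothetical saved copy of this log
    for line in fh:
        for m in PATTERN.finditer(line):
            counts[m.group("ts")] += 1

for second, n in sorted(counts.items()):
    print(f"{second}  NodeNotReady x{n}")

The output makes the cadence obvious at a glance (roughly ten recordings per second here), which is useful when deciding whether the condition ever flapped back to Ready.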
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.089188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.089257 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.089271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.089287 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.089298 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.191511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.191557 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.191568 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.191588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.191600 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.217254 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:35:08.098777053 +0000 UTC
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.227640 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:33:40 crc kubenswrapper[4771]: E0123 13:33:40.227800 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.294492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.294553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.294562 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.294583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.294600 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.397964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.398011 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.398024 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.398049 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.398063 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.500874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.500918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.500954 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.500971 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.500982 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.603597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.604031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.604170 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.604319 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.604500 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.707313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.707357 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.707366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.707384 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.707395 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.810665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.810712 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.810725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.810741 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.810752 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.913231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.913289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.913301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.913318 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:40 crc kubenswrapper[4771]: I0123 13:33:40.913329 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:40Z","lastTransitionTime":"2026-01-23T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.016314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.016358 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.016366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.016382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.016394 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.119545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.119620 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.119635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.119653 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.119669 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.218149 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:41:09.820740777 +0000 UTC
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.222534 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.222565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.222576 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.222593 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.222603 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
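The two certificate_manager records above deserve a second look: the kubelet-serving certificate does not expire until 2026-02-24, yet the computed rotation deadlines (2026-01-05 and 2025-11-20) are already behind the node clock of 2026-01-23, so each pass recomputes a freshly jittered deadline that is immediately overdue. Upstream client-go's certificate manager picks the deadline at a jittered point roughly 70-90% through the certificate's lifetime; the sketch below reproduces that arithmetic only approximately, and the one-year issuance lifetime is an assumption (the log states the expiry but not notBefore):

# Reproduce the rotation-deadline arithmetic behind the lines above.
import random
from datetime import datetime, timedelta

not_after = datetime(2026, 2, 24, 5, 53, 3)   # expiration from the log (UTC)
lifetime = timedelta(days=365)                # assumed issuance lifetime
not_before = not_after - lifetime

fraction = 0.7 + 0.2 * random.random()        # jittered point in [0.7, 0.9)
deadline = not_before + fraction * lifetime

now = datetime(2026, 1, 23, 13, 33, 41)       # node clock from the log
print("rotation deadline:", deadline)
print("rotation overdue: ", now >= deadline)  # True under these assumptions

Under any plausible lifetime the jittered deadline lands well before 2026-01-23, which is consistent with both logged deadlines sitting in the past.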
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.223810 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.223854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.223867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.223884 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.223897 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.229014 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.229134 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.229266 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn"
Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.229318 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491"
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.229706 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.229948 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
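Four pods (network-check-target, networking-console-plugin, network-metrics-daemon, network-check-source) are stuck before sandbox creation for the same reason: pod sync aborts while the network plugin is not ready. Extracting the pod/UID pairs from the "Error syncing pod" records gives a quick worklist. A minimal sketch over a hypothetical saved copy of this journal:

# List which pods fail to sync, and why, from records like the ones above.
import re

REC = re.compile(
    r'"Error syncing pod, skipping" err="(?P<err>[^"]*)" '
    r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

with open("kubelet-journal.log") as fh:  # hypothetical saved copy of this log
    text = fh.read()

for m in REC.finditer(text):
    print(f'{m.group("pod")}  uid={m.group("uid")}')
    print(f'  err: {m.group("err")[:72]}')

All four entries print the same err prefix here, confirming a single root cause (missing CNI configuration) rather than per-pod failures.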
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.238842 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.242662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.242746 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
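The body the kubelet tried to send in the failed update above is a strategic merge patch: the $setElementOrder/conditions directive pins the order of the conditions list, and each condition entry is merged into the node's existing list by its type key, so only changed fields travel. A small sketch of the same shape, with values copied from the log and trimmed to the Ready condition for brevity:

# Rebuild the shape of the strategic merge patch the kubelet sent above.
import json

condition_order = ["MemoryPressure", "DiskPressure", "PIDPressure", "Ready"]

patch = {
    "status": {
        "$setElementOrder/conditions": [{"type": t} for t in condition_order],
        "conditions": [
            {
                "type": "Ready",            # merge key for the conditions list
                "status": "False",
                "reason": "KubeletNotReady",
                "lastHeartbeatTime": "2026-01-23T13:33:41Z",
                "lastTransitionTime": "2026-01-23T13:33:41Z",
            }
        ],
    }
}

print(json.dumps(patch, indent=2))

Note that the rejection happens during the TLS handshake with the node.network-node-identity.openshift.io admission webhook, before the API server evaluates the patch at all, so the payload's contents are irrelevant to the failure; the identical retries that follow confirm this.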
event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.242764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.242788 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.242813 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.259519 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.263683 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.263718 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.263726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.263746 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.263757 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.276679 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.282626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.282659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.282668 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.282682 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.282691 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.295074 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.298502 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.298541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.298555 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.298571 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.298583 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.310578 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:41Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:41 crc kubenswrapper[4771]: E0123 13:33:41.310706 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.325074 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.325105 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.325114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.325127 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.325137 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.428096 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.428182 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.428199 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.428214 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.428234 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.531628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.531703 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.531725 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.531747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.531764 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.639479 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.639550 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.639575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.639616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.639640 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.742386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.742442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.742464 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.742483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.742497 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.845700 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.845738 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.845746 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.845758 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.845767 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
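
The NodeNotReady heartbeats that fill the rest of this capture are the downstream symptom: with the network plugin not yet running, nothing has written a CNI config, so the runtime reports NetworkReady=false and the kubelet re-records the same "no CNI configuration file in /etc/kubernetes/cni/net.d/" condition roughly every 100 ms. A quick on-node check of that directory (the path is taken from the log; the snippet is an illustrative sketch, not part of any OpenShift tooling):

package main

import (
	"fmt"
	"os"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir+":", err) // e.g. when not run on the node
		return
	}
	if len(entries) == 0 {
		fmt.Println(dir, "is empty: no CNI configuration, node stays NotReady")
		return
	}
	for _, e := range entries {
		fmt.Println("found:", e.Name()) // a config appears once the network plugin starts
	}
}
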
Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.948355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.948392 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.948471 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.948492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:41 crc kubenswrapper[4771]: I0123 13:33:41.948518 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:41Z","lastTransitionTime":"2026-01-23T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.051003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.051058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.051069 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.051086 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.051097 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.154098 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.154182 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.154192 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.154210 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.154225 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.218605 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:41:37.847448063 +0000 UTC Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.228013 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:42 crc kubenswrapper[4771]: E0123 13:33:42.228179 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.257745 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.257878 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.257894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.257918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.257933 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.360222 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.360264 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.360273 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.360292 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.360302 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.463703 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.463747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.463756 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.463775 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.463788 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.567462 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.567513 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.567526 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.567547 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.567563 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.670707 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.670741 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.670753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.670766 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.670775 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.773767 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.773818 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.773830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.773854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.773865 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.876549 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.876629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.876643 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.876665 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.876678 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.917595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:42 crc kubenswrapper[4771]: E0123 13:33:42.917780 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 13:33:42 crc kubenswrapper[4771]: E0123 13:33:42.917859 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:34:14.917831996 +0000 UTC m=+95.940369621 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered
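
The MountVolume failure just above is a secondary effect rather than a third problem: the secret openshift-multus/metrics-daemon-secret is simply "not registered" in the kubelet's object cache yet, so the mount operation is parked under exponential backoff; "m=+95.940369621" is the kubelet's monotonic uptime (about 96 s), and the retry at 13:34:14 is exactly 32 s after the failure at 13:33:42. A sketch of that capped-doubling pattern (the 500 ms base, factor of 2, and 2-minute cap are assumptions chosen to reproduce the 32 s step seen here, not values read from kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: wait %v before retrying\n", attempt, delay)
		delay *= 2 // doubling backoff, as suggested by the 32s entry above
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
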
Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.979261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.979296 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.979324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.979343 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:42 crc kubenswrapper[4771]: I0123 13:33:42.979355 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:42Z","lastTransitionTime":"2026-01-23T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.082714 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.082760 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.082771 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.082787 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.082800 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.185793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.185842 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.185855 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.185872 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.185888 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.219472 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 23:22:19.363016219 +0000 UTC Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.227773 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.227782 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:43 crc kubenswrapper[4771]: E0123 13:33:43.227868 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.227983 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:43 crc kubenswrapper[4771]: E0123 13:33:43.228067 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:43 crc kubenswrapper[4771]: E0123 13:33:43.227991 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.288551 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.288609 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.288626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.288654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.288681 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.390891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.390933 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.390944 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.390962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.390974 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.493678 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.493734 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.493744 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.493764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.493780 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.599891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.599945 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.599957 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.599980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.599991 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.702374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.702431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.702441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.702457 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.702469 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.806111 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.806159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.806170 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.806188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.806203 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.908248 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.908297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.908310 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.908324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:43 crc kubenswrapper[4771]: I0123 13:33:43.908334 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:43Z","lastTransitionTime":"2026-01-23T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.011320 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.011441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.011619 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.011648 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.011663 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.113772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.114298 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.114314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.114342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.114359 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.216796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.216861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.216871 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.216890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.216901 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.220024 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:54:22.456251803 +0000 UTC Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.227529 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:44 crc kubenswrapper[4771]: E0123 13:33:44.227709 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.319369 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.319441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.319453 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.319477 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.319491 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.422141 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.422186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.422195 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.422211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.422222 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.525384 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.525497 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.525509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.525533 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.525567 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.628236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.628281 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.628294 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.628310 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.628323 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.702358 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/0.log" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.702432 4771 generic.go:334] "Generic (PLEG): container finished" podID="803fce37-afd3-4ce0-9135-ccb3831e206c" containerID="e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4" exitCode=1 Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.702467 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerDied","Data":"e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.702911 4771 scope.go:117] "RemoveContainer" containerID="e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.717740 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.731523 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.731554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.731565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.731581 4771 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.731594 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.731473 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.746061 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.759075 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.779237 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d
98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.793715 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.804454 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.816720 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.829988 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.836068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.836191 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.836208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.836237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.836262 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.846660 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.858444 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.870403 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.882665 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.891771 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.904639 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.915259 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.925845 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:44Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.939219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.939288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.939302 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.939321 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:44 crc kubenswrapper[4771]: I0123 13:33:44.939335 
4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:44Z","lastTransitionTime":"2026-01-23T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.042155 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.042221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.042232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.042271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.042296 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.144335 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.144370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.144379 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.144393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.144405 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.220346 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:27:16.739857952 +0000 UTC Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.227916 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:45 crc kubenswrapper[4771]: E0123 13:33:45.228068 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.228128 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.228166 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:45 crc kubenswrapper[4771]: E0123 13:33:45.228333 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:45 crc kubenswrapper[4771]: E0123 13:33:45.228727 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.239015 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.246774 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.246846 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.246863 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.246887 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.246902 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.350047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.350102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.350114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.350135 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.350150 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.452716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.452777 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.452794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.452815 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.452835 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.555252 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.555322 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.555342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.555365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.555378 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.658677 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.658724 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.658737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.658754 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.658766 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.708097 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/0.log" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.708446 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerStarted","Data":"a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.719056 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.732178 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.745315 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.756863 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.760687 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.760861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.760934 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.761047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.761107 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.768697 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.783543 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.795698 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.814784 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.830135 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.843081 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.858653 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.863139 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.863192 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.863204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.863223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.863234 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.871331 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.885095 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.896831 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.908210 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.917782 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.927807 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.939150 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:45Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.970277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.970659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.970741 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.970828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:45 crc kubenswrapper[4771]: I0123 13:33:45.970903 4771 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:45Z","lastTransitionTime":"2026-01-23T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.074085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.074138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.074153 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.074172 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.074188 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.176261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.176287 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.176296 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.176310 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.176318 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.221326 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:32:02.021059591 +0000 UTC Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.227758 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:46 crc kubenswrapper[4771]: E0123 13:33:46.227956 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.278930 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.278972 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.278981 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.278996 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.279007 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.381446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.381487 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.381498 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.381512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.381524 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.483229 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.483258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.483266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.483279 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.483287 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.586898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.586940 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.586951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.586991 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.587007 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.689828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.689898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.689921 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.689949 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.689971 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.792997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.793037 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.793046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.793063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.793077 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.895543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.895620 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.895644 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.895673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.895696 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.997809 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.997854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.997863 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.997878 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:46 crc kubenswrapper[4771]: I0123 13:33:46.997891 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:46Z","lastTransitionTime":"2026-01-23T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.101010 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.101072 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.101084 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.101106 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.101119 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.204479 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.204533 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.204542 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.204559 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.204571 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.221840 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:00:01.594236337 +0000 UTC Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.227144 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.227152 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:47 crc kubenswrapper[4771]: E0123 13:33:47.227275 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.227400 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:47 crc kubenswrapper[4771]: E0123 13:33:47.227516 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:47 crc kubenswrapper[4771]: E0123 13:33:47.227887 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.307258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.307299 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.307314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.307343 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.307363 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.410101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.410150 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.410162 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.410185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.410199 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.512615 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.512656 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.512666 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.512680 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.512692 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.615735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.615782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.615793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.615815 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.615826 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.718386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.718515 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.718538 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.718561 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.718576 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.821139 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.821216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.821229 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.821253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.821266 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.924317 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.924367 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.924380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.924428 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:47 crc kubenswrapper[4771]: I0123 13:33:47.924446 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:47Z","lastTransitionTime":"2026-01-23T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.027565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.027621 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.027637 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.027660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.027675 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.130314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.130362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.130374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.130397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.130430 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.222481 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:55:15.287440878 +0000 UTC Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.227799 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:48 crc kubenswrapper[4771]: E0123 13:33:48.227945 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.232765 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.232835 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.232851 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.232873 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.232885 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.335976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.336031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.336041 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.336064 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.336075 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.439146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.439194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.439204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.439221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.439231 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.542570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.542621 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.542636 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.542662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.542679 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.647226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.647298 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.647318 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.647348 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.647372 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.750031 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.750083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.750096 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.750114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.750158 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.853116 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.853184 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.853198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.853220 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.853233 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.955947 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.955992 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.956002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.956016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:48 crc kubenswrapper[4771]: I0123 13:33:48.956027 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:48Z","lastTransitionTime":"2026-01-23T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.058891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.058939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.058951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.058967 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.058984 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.162395 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.162467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.162481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.162501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.162514 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.222811 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:39:07.605622739 +0000 UTC Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.227334 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.227521 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.227636 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:49 crc kubenswrapper[4771]: E0123 13:33:49.227804 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:49 crc kubenswrapper[4771]: E0123 13:33:49.227889 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:49 crc kubenswrapper[4771]: E0123 13:33:49.227938 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.240261 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.253610 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.265592 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.265650 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.265664 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.265696 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.265712 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.268750 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.283312 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.303332 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.321325 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.334887 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.349670 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.364515 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.368629 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.368702 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.368713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.368737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.368749 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.375986 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.385795 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.398994 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.409924 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.422160 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.431095 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.443586 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.453635 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.465350 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:49Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.471479 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.471543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.471553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.471590 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.471602 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.575915 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.575966 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.575978 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.576000 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.576017 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.678614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.678667 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.678676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.678694 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.678705 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.782335 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.782378 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.782389 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.782422 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.782434 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.885236 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.885301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.885313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.885333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.885346 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.988333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.988380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.988391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.988408 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:49 crc kubenswrapper[4771]: I0123 13:33:49.988437 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:49Z","lastTransitionTime":"2026-01-23T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.091439 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.091480 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.091489 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.091506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.091517 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.194012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.194073 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.194084 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.194098 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.194109 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.223698 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:18:56.708576769 +0000 UTC Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.228005 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:50 crc kubenswrapper[4771]: E0123 13:33:50.228184 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.297198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.297931 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.297959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.297984 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.298001 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.400511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.400580 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.400594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.400614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.400627 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.503509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.503560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.503571 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.503587 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.503598 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.605597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.606339 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.606402 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.606501 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.606520 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.709022 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.709080 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.709096 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.709120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.709134 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.812025 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.812085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.812098 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.812121 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.812136 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.915373 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.915436 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.915450 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.915469 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:50 crc kubenswrapper[4771]: I0123 13:33:50.915482 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:50Z","lastTransitionTime":"2026-01-23T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.019370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.019951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.019967 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.020012 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.020026 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.123129 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.123235 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.123254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.123279 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.123307 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.223967 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:46:12.160426627 +0000 UTC Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.225304 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.225354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.225365 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.225384 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.225397 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.227812 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.227821 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.227948 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.228012 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.227821 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.228121 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.328709 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.328779 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.328790 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.328808 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.328820 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.431255 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.431307 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.431318 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.431338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.431353 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.529493 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.529552 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.529565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.529582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.529594 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.542925 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.547333 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.547368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.547377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.547391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.547418 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.560679 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.564473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.564521 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.564555 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.564570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.564580 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.582475 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.587091 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.587137 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.587147 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.587162 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.587172 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.601101 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.605218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.605261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.605273 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.605297 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.605311 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.620769 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:51Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:51 crc kubenswrapper[4771]: E0123 13:33:51.620887 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.622909 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.622979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.622997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.623023 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.623038 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.726223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.726277 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.726289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.726310 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.726323 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.829257 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.829293 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.829301 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.829316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.829326 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.931716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.931759 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.931768 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.931783 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:51 crc kubenswrapper[4771]: I0123 13:33:51.931794 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:51Z","lastTransitionTime":"2026-01-23T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.034598 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.034639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.034652 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.034669 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.034682 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.137892 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.137946 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.137957 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.137976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.137986 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.224987 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 18:55:37.017735671 +0000 UTC Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.227298 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:52 crc kubenswrapper[4771]: E0123 13:33:52.227461 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.240068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.240094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.240102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.240114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.240124 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.343125 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.343186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.343201 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.343226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.343246 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.445312 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.445400 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.445435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.445460 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.445477 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.548601 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.548666 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.548680 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.548697 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.548730 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.651823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.651869 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.651882 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.651898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.651910 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.755928 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.755972 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.755980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.755997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.756019 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.859280 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.859352 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.859362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.859383 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.859394 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.962481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.962544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.962554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.962575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:52 crc kubenswrapper[4771]: I0123 13:33:52.962587 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:52Z","lastTransitionTime":"2026-01-23T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.065533 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.065582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.065595 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.065613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.065626 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.168788 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.168823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.168833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.168846 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.168855 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.225591 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:25:03.354098421 +0000 UTC Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.228180 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.228217 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:53 crc kubenswrapper[4771]: E0123 13:33:53.228454 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.228484 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:53 crc kubenswrapper[4771]: E0123 13:33:53.228667 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:53 crc kubenswrapper[4771]: E0123 13:33:53.228800 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.272159 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.272215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.272226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.272245 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.272257 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.376032 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.376113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.376140 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.376172 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.376196 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.484104 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.484219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.484246 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.484291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.484314 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.586660 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.586695 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.586704 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.586718 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.586727 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.689369 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.689427 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.689438 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.689454 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.689473 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.792015 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.792045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.792054 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.792066 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.792076 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.894514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.894544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.894553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.894565 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.894575 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.996378 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.996449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.996462 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.996482 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:53 crc kubenswrapper[4771]: I0123 13:33:53.996494 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:53Z","lastTransitionTime":"2026-01-23T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.099864 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.099913 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.099928 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.099952 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.099968 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.202796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.202869 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.202883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.202899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.202913 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.226263 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:55:02.84218807 +0000 UTC Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.227501 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:54 crc kubenswrapper[4771]: E0123 13:33:54.227744 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.228840 4771 scope.go:117] "RemoveContainer" containerID="3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.305103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.305146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.305158 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.305177 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.305193 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.407991 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.408071 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.408080 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.408093 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.408102 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.511237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.511300 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.511311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.511323 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.511331 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.614217 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.614275 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.614287 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.614307 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.614321 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.717990 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.718026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.718035 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.718050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.718061 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.740616 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/2.log" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.743906 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.744662 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.764188 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a
57407727b065af728f19fe91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.777984 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.790007 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.813956 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.820059 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.820122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.820136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.820176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.820193 4771 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.833371 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.856337 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.873919 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.885853 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.897211 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.908316 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.920621 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.922188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.922214 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.922223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.922237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.922246 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:54Z","lastTransitionTime":"2026-01-23T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.935888 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.949289 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.961114 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.977427 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:54 crc kubenswrapper[4771]: I0123 13:33:54.995431 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:54Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.009323 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.025000 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.025050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.025063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.025080 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.025093 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.029771 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.127557 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.127594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.127603 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.127616 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.127624 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.227079 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:57:28.122662941 +0000 UTC Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.227471 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:55 crc kubenswrapper[4771]: E0123 13:33:55.227666 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.227874 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:55 crc kubenswrapper[4771]: E0123 13:33:55.227950 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.228069 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:55 crc kubenswrapper[4771]: E0123 13:33:55.228132 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
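
Note on NetworkReady=false: this condition is reported by the container runtime, which finds no network configuration in /etc/kubernetes/cni/net.d/. In an OVN-Kubernetes cluster that file is normally written by the ovnkube-node pod, which is itself crash-looping below, so the node stays NotReady until it recovers. A sketch of the directory check; the extension list follows common CNI convention (.conf, .conflist, .json) rather than any confirmed CRI-O source.

// cnicheck.go: look for a CNI network config the way the message implies.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatal(err)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file; the runtime reports NetworkReady=false")
	}
}
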
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.230924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.231025 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.231046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.231065 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.231136 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.333357 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.333454 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.333481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.333508 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.333528 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.435903 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.436030 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.436050 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.436079 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.436135 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.539371 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.539430 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.539441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.539459 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.539472 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.643155 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.643194 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.643203 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.643220 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.643231 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.746338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.746382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.746391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.746424 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.746433 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.748911 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/3.log" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.749613 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/2.log" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.752225 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91" exitCode=1 Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.752285 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.752341 4771 scope.go:117] "RemoveContainer" containerID="3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.753444 4771 scope.go:117] "RemoveContainer" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91" Jan 23 13:33:55 crc kubenswrapper[4771]: E0123 13:33:55.754205 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.767527 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.781987 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.794769 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.804210 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.814842 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.824075 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.837256 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.849176 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.849441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.849507 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.849569 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.849627 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.854579 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.867081 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.882956 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.893806 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.917207 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3267fe9cf866781c552567d281c638e8b557a77d98821ba534a246a6c02f3adb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:24Z\\\",\\\"message\\\":\\\"factory\\\\nI0123 13:33:23.957144 6390 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 13:33:23.957191 6390 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957194 6390 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957227 6390 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 13:33:23.957252 6390 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 13:33:23.971639 6390 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 13:33:23.971659 6390 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 13:33:23.971716 6390 ovnkube.go:599] Stopped ovnkube\\\\nI0123 13:33:23.971740 6390 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 13:33:23.971823 6390 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:55Z\\\",\\\"message\\\":\\\"led to start node network controller: failed to start default node 
network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z]\\\\nI0123 13:33:55.383729 6795 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-qbvcq\\\\nI0123 13:33:55.383619 6795 services_controller.go:434] Service openshift-route-controller-manager/route-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 754a1504-193a-42d9-b250-5d40bcccc281 4720 0 2025-02-23 05:22:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.928840 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.939969 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.950288 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.951569 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.951605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.951617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.951634 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.951645 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:55Z","lastTransitionTime":"2026-01-23T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.961524 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.973268 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:55 crc kubenswrapper[4771]: I0123 13:33:55.982679 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.054171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.054217 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.054229 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.054247 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.054262 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.157481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.157528 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.157541 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.157563 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.157576 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.227895 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.227873 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:43:47.917784034 +0000 UTC Jan 23 13:33:56 crc kubenswrapper[4771]: E0123 13:33:56.228039 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.261662 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.261723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.261744 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.261776 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.261797 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.364465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.364545 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.364570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.364605 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.364629 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.466982 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.467062 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.467072 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.467099 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.467114 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.570002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.570048 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.570063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.570082 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.570097 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.672764 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.672833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.672846 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.672865 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.672877 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.759737 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/3.log" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.766031 4771 scope.go:117] "RemoveContainer" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91" Jan 23 13:33:56 crc kubenswrapper[4771]: E0123 13:33:56.766211 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.776096 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.776153 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.776171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.776195 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.776218 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.787401 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.813460 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.831919 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.865827 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:55Z\\\",\\\"message\\\":\\\"led to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z]\\\\nI0123 13:33:55.383729 6795 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-qbvcq\\\\nI0123 13:33:55.383619 6795 services_controller.go:434] Service openshift-route-controller-manager/route-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 754a1504-193a-42d9-b250-5d40bcccc281 4720 0 2025-02-23 05:22:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.879041 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.879086 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.879103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.879124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.879141 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.881219 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.897492 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.917740 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.931799 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.944258 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.954472 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.965608 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.978570 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.980994 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.981101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.981124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.981146 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.981164 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:56Z","lastTransitionTime":"2026-01-23T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:56 crc kubenswrapper[4771]: I0123 13:33:56.991220 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.000760 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:56Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.012155 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.027201 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.041089 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.054372 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:57Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.084610 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.084661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.084672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.084693 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.084706 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.186890 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.186941 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.186953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.186971 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.186988 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.227438 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.227501 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:33:57 crc kubenswrapper[4771]: E0123 13:33:57.227599 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.227404 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn"
Jan 23 13:33:57 crc kubenswrapper[4771]: E0123 13:33:57.227848 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 13:33:57 crc kubenswrapper[4771]: E0123 13:33:57.227936 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.227983 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:19:56.011061147 +0000 UTC
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.290046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.290101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.290122 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.290149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.290176 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.392820 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.392894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.392918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.392945 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.392964 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.496128 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.496171 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.496181 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.496198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.496209 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.598924 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.598985 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.599003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.599026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.599043 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.701716 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.701766 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.701777 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.701796 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.701841 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.804888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.804944 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.804961 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.804984 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.805001 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.907646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.907708 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.907720 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.907743 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:57 crc kubenswrapper[4771]: I0123 13:33:57.907756 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:57Z","lastTransitionTime":"2026-01-23T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.011307 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.011367 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.011386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.011441 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.011462 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.114533 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.114651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.114674 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.114707 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.114732 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.218828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.218939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.218959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.218988 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.219011 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.228150 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:33:58 crc kubenswrapper[4771]: E0123 13:33:58.228370 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
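[editor's note] Every entry above traces back to a single condition: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so the kubelet keeps the node NotReady and refuses to create pod sandboxes. A minimal standalone probe for that condition might look like the sketch below; this is not kubelet source, and only the directory path comes from the logged messages.

```go
// cnicheck reports whether a CNI configuration directory contains any
// config file, the readiness notion described by the NetworkReady
// messages in this log. Standalone diagnostic sketch, not kubelet code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the NetworkReady message
	var confs []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(2)
		}
		confs = append(confs, m...)
	}
	if len(confs) == 0 {
		// Mirrors the condition the kubelet keeps republishing here.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true, found:", confs)
}
```

Until the network provider (here OVN-Kubernetes, via multus) writes a config into that directory, the status blocks and "Error syncing pod, skipping" entries simply repeat on every sync.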
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.228478 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:31:47.806137259 +0000 UTC Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.322258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.322311 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.322331 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.322354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.322371 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.425713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.425773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.425795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.425823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.425846 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.528399 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.528516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.528530 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.528553 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.528566 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.630908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.630951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.630962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.630979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.630989 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.734269 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.734322 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.734334 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.734354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.734365 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.837205 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.837743 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.837799 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.837828 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.837877 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.940833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.940892 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.940905 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.940922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:58 crc kubenswrapper[4771]: I0123 13:33:58.940936 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:58Z","lastTransitionTime":"2026-01-23T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.044438 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.044485 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.044498 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.044516 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.044526 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:59Z","lastTransitionTime":"2026-01-23T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.147231 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.147264 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.147274 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.147291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.147302 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:59Z","lastTransitionTime":"2026-01-23T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.227453 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.227474 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.227502 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:33:59 crc kubenswrapper[4771]: E0123 13:33:59.228010 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:33:59 crc kubenswrapper[4771]: E0123 13:33:59.228112 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:33:59 crc kubenswrapper[4771]: E0123 13:33:59.228185 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
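[editor's note] The certificate_manager.go:356 lines report the same expiration (2026-02-24 05:53:03 UTC) every time but a different rotation deadline on each pass. That is consistent with a deadline that is re-drawn with jitter on every computation; the sketch below assumes the commonly cited client-go behavior of picking a uniform point between 70% and 90% of the certificate's validity window. The constants and the one-year NotBefore are assumptions, not values read from this node.

```go
// Sketch of why each "rotation deadline" log line lands at a different
// instant: pick a uniformly random point between 70% and 90% of the
// certificate's lifetime. Approximates client-go's jittered
// nextRotationDeadline; the 0.7/0.2 constants are an assumption.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // jitter into the 70%-90% window
	return notBefore.Add(time.Duration(frac * float64(lifetime)))
}

func main() {
	// Expiry taken from the log's "Certificate expiration" lines;
	// NotBefore is a guess (the log prints only the expiry).
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed 1-year certificate
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```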
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.229362 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:06:01.710042239 +0000 UTC Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.242355 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.249490 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.249543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.249560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.249582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.249596 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:59Z","lastTransitionTime":"2026-01-23T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
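[editor's note] The status-patch failure above has its root cause spelled out in the error text: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-23. A small diagnostic that dials the logged address and prints the presented certificate's validity window is sketched below; InsecureSkipVerify forces the handshake past verification so the expired certificate can still be inspected.

```go
// Dial the webhook endpoint named in the log and print the validity
// window of the certificate it presents. Diagnostic sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook address from the failed Post in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\nnotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// The exact failure mode here: current time is after NotAfter.
		fmt.Println("certificate has expired")
	}
}
```

Every "Failed to update status for pod" entry that follows fails the same way, so one expired webhook certificate blocks status patches for all of these pods at once.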
Has your network provider started?"} Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.259930 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.283264 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
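[editor's note] The kube-multus termination message embedded above shows the other side of the CNI wait: multus polled for its readiness indicator file, /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, and exited 1 when the poll timed out. A stand-in for that wait loop is sketched below; it is not multus source, and the path plus the rough 45-second budget are taken from the logged timestamps (13:32:59 to 13:33:44).

```go
// Minimal stand-in for the readiness-indicator wait described in the
// kube-multus termination message: poll for a file until it appears or
// a deadline passes. Path and timeout come from the logged values.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file exists: default network is ready
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf" // from the log
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // matches the container's exitCode 1 in the patch payload
	}
	fmt.Println("readiness indicator present")
}
```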
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.298464 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.316458 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:55Z\\\",\\\"message\\\":\\\"led to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z]\\\\nI0123 13:33:55.383729 6795 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-qbvcq\\\\nI0123 13:33:55.383619 6795 services_controller.go:434] Service openshift-route-controller-manager/route-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 754a1504-193a-42d9-b250-5d40bcccc281 4720 0 2025-02-23 05:22:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.334393 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.350141 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.351645 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.351689 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.351701 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.351719 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.351735 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:59Z","lastTransitionTime":"2026-01-23T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.368633 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.380574 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.393050 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.406194 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.416897 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.426566 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 
13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.435649 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.444874 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.454006 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.454042 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.454054 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.454070 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.454084 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:59Z","lastTransitionTime":"2026-01-23T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.457263 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.470121 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.481933 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:59Z is after 2025-08-24T17:21:41Z" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.556401 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.556450 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.556463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.556481 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.556493 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:59Z","lastTransitionTime":"2026-01-23T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.659009 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.659045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.659053 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.659067 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:33:59 crc kubenswrapper[4771]: I0123 13:33:59.659077 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:33:59Z","lastTransitionTime":"2026-01-23T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats about every 100 ms from 13:33:59.760 through 13:34:00.172 ...]
Jan 23 13:34:00 crc kubenswrapper[4771]: I0123 13:34:00.228193 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:34:00 crc kubenswrapper[4771]: E0123 13:34:00.228523 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:34:00 crc kubenswrapper[4771]: I0123 13:34:00.230207 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:35:43.577491045 +0000 UTC
[... the node-status cycle continues at the same cadence from 13:34:00.274 through 13:34:01.203 ...]
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.217188 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.217346 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.217311419 +0000 UTC m=+146.239849084 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
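The TearDown failure above means the kubelet has no registered CSI driver named kubevirt.io.hostpath-provisioner yet: drivers announce themselves by writing a gRPC socket into a directory the kubelet's plugin watcher scans. A minimal sketch of that check, assuming the default kubelet registration path (this log never names it):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // CSI drivers register with the kubelet by dropping a socket here;
        // the path is the kubelet default and is an assumption of this sketch.
        const regDir = "/var/lib/kubelet/plugins_registry"

        entries, err := os.ReadDir(regDir)
        if err != nil {
            fmt.Println("cannot read plugin registry:", err)
            return
        }
        if len(entries) == 0 {
            fmt.Println("no plugins registered - consistent with the TearDownAt error above")
        }
        for _, e := range entries {
            fmt.Println("registered plugin socket:", e.Name())
        }
    }

Until the hostpath provisioner comes back and re-registers, the unmount stays parked for the full 1m4s backoff shown in the record.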
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.217517 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.217602 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.217662 4771 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.217728 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.217710022 +0000 UTC m=+146.240247657 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.217787 4771 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.217861 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.217845976 +0000 UTC m=+146.240383651 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.227669 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.227704 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.227844 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.228017 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.228105 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.228362 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.230614 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:23:37.473125663 +0000 UTC
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.306908 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.306965 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.306979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.307000 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.307017 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:01Z","lastTransitionTime":"2026-01-23T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.318952 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.319057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319257 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319289 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319309 4771 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319330 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319381 4771 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319441 4771 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319455 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.319385202 +0000 UTC m=+146.341922867 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.319539 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.319506756 +0000 UTC m=+146.342044431 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[... the node-status cycle repeats at 13:34:01.410 and 13:34:01.513 ...]
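Both projected-volume mounts are now parked until 13:35:05, exactly 64 seconds out. That 1m4s is consistent with a per-operation exponential backoff that doubles on each failure; the 500 ms initial delay and the roughly two-minute cap in this sketch are assumptions, not values read from the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Reproduce the delay ladder implied by "durationBeforeRetry 1m4s":
        // doubling from an assumed 500ms reaches 1m4s on the 8th failure.
        d := 500 * time.Millisecond
        for attempt := 1; d <= 2*time.Minute; attempt++ {
            fmt.Printf("attempt %d: wait %v\n", attempt, d)
            d *= 2
        }
    }

Running it prints "attempt 8: wait 1m4s", matching the durationBeforeRetry in the two records above, so this pod has likely been failing these mounts since shortly after the kubelet started.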
[... the node-status cycle repeats at 13:34:01.617 and 13:34:01.720 ...]
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.822673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.822720 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.822731 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.822745 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.822760 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:01Z","lastTransitionTime":"2026-01-23T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.893583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.893642 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.893658 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.893681 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.893699 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:01Z","lastTransitionTime":"2026-01-23T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.915333 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:01Z is after 
2025-08-24T17:21:41Z" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.920222 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.920335 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.920374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.920431 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.920459 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:01Z","lastTransitionTime":"2026-01-23T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.937460 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:01Z is after 
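The node stays NotReady in the API partly because every status patch is rejected by the node-identity webhook, whose serving certificate expired on 2025-08-24 while the node clock now reads 2026-01-23. A quick sketch that reads the certificate's validity window straight off the endpoint named in the error, assuming the webhook is still listening on 127.0.0.1:9743:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Endpoint taken from the failed Post in the records above.
        const addr = "127.0.0.1:9743"

        conn, err := tls.Dial("tcp", addr, &tls.Config{
            InsecureSkipVerify: true, // we only want to read the cert, not trust it
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("NotBefore:", cert.NotBefore)
        fmt.Println("NotAfter: ", cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            fmt.Println("expired - matches the x509 error in the patch failure")
        }
    }

Until that certificate is rotated (or the clock skew is resolved), the kubelet will keep retrying the same patch and logging the same payload.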
2025-08-24T17:21:41Z" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.942468 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.942531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.942548 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.942574 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.942593 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:01Z","lastTransitionTime":"2026-01-23T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.965021 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:01Z is after 
2025-08-24T17:21:41Z"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.970891 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.971298 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.971381 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.971499 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.971611 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:01Z","lastTransitionTime":"2026-01-23T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:01 crc kubenswrapper[4771]: E0123 13:34:01.988563 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" [status patch payload elided; byte-for-byte identical to the 13:34:01.965021 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:02Z is after 2025-08-24T17:21:41Z"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.992670 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.992709 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.992728 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.992748 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:01 crc kubenswrapper[4771]: I0123 13:34:01.992764 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:01Z","lastTransitionTime":"2026-01-23T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:02 crc kubenswrapper[4771]: E0123 13:34:02.008047 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" [status patch payload elided; identical to the 13:34:01.965021 attempt above] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:02Z is after 2025-08-24T17:21:41Z"
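Every one of the patch attempts above fails the same way: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24, while the node clock reads 2026-01-23, so Go's crypto/x509 verification rejects the handshake before the POST is ever delivered. A minimal standalone sketch (not kubelet or OVN-Kubernetes code; only the endpoint is taken from the log) that reproduces the validity check behind "certificate has expired or is not yet valid":

// checkcert.go -- connect to a TLS listener and report certificate validity
// the way crypto/x509 does in the kubelet error above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification so even an expired certificate can be inspected.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The condition reported in the log: current time is past NotAfter.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}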
Jan 23 13:34:02 crc kubenswrapper[4771]: E0123 13:34:02.008526 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.010374 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.010446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.010465 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.010488 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.010505 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[an identical five-entry event block (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") at 13:34:02.112 is elided]
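The "exceeds retry count" entry is the kubelet giving up on this sync: upstream kubelet bounds node-status patches with a small fixed budget (nodeStatusUpdateRetry, 5 attempts) and raises exactly this error when the budget runs out. A simplified sketch of that control flow, not the real kubelet code; the webhook stand-in and error strings are shortened for illustration:

// retry.go -- bounded retry around a node-status patch, mirroring the
// "will retry" / "exceeds retry count" pair seen in the log.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // kubelet's fixed per-sync retry budget

// tryPatchNodeStatus stands in for the PATCH the admission webhook rejects.
func tryPatchNodeStatus() error {
	return errors.New("failed calling webhook \"node.network-node-identity.openshift.io\": certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}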
Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.215774 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.215821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.215831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.215848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.215858 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.227040 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:02 crc kubenswrapper[4771]: E0123 13:34:02.227189 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.231624 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:33:35.299884958 +0000 UTC Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.318674 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.318714 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.318723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.318739 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.318786 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.422033 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.422091 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.422101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.422117 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.422131 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.524833 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.524897 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.524910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.524926 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.524936 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.627717 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.627767 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.627781 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.627800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.627814 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.729628 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.729684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.729693 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.729706 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.729717 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.832771 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.832839 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.832852 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.832868 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.832880 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.936521 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.936577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.936590 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.936608 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:02 crc kubenswrapper[4771]: I0123 13:34:02.936621 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:02Z","lastTransitionTime":"2026-01-23T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.040303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.040345 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.040354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.040370 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.040381 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.142546 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.142586 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.142597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.142611 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.142622 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.227937 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.228002 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:03 crc kubenswrapper[4771]: E0123 13:34:03.228075 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:03 crc kubenswrapper[4771]: E0123 13:34:03.228141 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.228607 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:03 crc kubenswrapper[4771]: E0123 13:34:03.228685 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.231706 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:56:26.023677223 +0000 UTC Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.244793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.244850 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.244868 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.244896 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.244915 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.347587 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.347712 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.347740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.347773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.347796 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.451276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.451349 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.451368 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.451397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.451447 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.555382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.555566 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.555591 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.555625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.555648 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.658597 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.658659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.658679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.658702 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.658721 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.762223 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.762303 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.762316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.762343 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.762357 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.870172 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.870251 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.870266 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.870290 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.870312 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.974382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.974509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.974536 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.974566 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:03 crc kubenswrapper[4771]: I0123 13:34:03.974587 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:03Z","lastTransitionTime":"2026-01-23T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.077581 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.077643 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.077654 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.077673 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.077684 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.180218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.180258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.180270 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.180285 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.180297 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.227228 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:04 crc kubenswrapper[4771]: E0123 13:34:04.227399 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.232267 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 02:12:55.763832054 +0000 UTC Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.283154 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.283198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.283206 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.283221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.283234 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.385688 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.385744 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.385754 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.385776 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.385788 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.488322 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.488382 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.488391 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.488423 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.488435 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.591960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.592006 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.592017 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.592037 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.592054 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.695053 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.695101 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.695113 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.695130 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.695142 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.796472 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.796506 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.796514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.796529 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.796538 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.899910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.899955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.899965 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.899979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:04 crc kubenswrapper[4771]: I0123 13:34:04.899988 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:04Z","lastTransitionTime":"2026-01-23T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.003242 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.003314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.003328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.003351 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.003370 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.106678 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.106762 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.106799 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.106830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.106852 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.209380 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.209442 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.209451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.209467 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.209477 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.228075 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.228122 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.228136 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:05 crc kubenswrapper[4771]: E0123 13:34:05.228243 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:05 crc kubenswrapper[4771]: E0123 13:34:05.228448 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:05 crc kubenswrapper[4771]: E0123 13:34:05.228532 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.233039 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:05:45.683345526 +0000 UTC Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.312338 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.312436 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.312454 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.312475 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.312488 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.415469 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.415512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.415523 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.415540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.415555 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.518577 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.518640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.518658 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.518684 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.518705 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.620794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.620827 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.620835 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.620848 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.620858 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.723668 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.723732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.723754 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.723784 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.723806 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.827575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.828010 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.828179 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.828335 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.828530 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.931594 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.931998 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.932276 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.932543 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:05 crc kubenswrapper[4771]: I0123 13:34:05.932907 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:05Z","lastTransitionTime":"2026-01-23T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.036936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.037016 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.037039 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.037068 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.037091 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.139972 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.140042 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.140055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.140076 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.140097 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.227958 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:06 crc kubenswrapper[4771]: E0123 13:34:06.228107 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.234186 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:23:37.030318626 +0000 UTC Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.244893 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.244943 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.244958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.244993 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.245009 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.347953 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.348025 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.348046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.348073 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.348092 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.450896 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.450955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.450963 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.450982 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.450992 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.554363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.554492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.554520 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.554552 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.554577 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.657957 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.658040 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.658058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.658083 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.658101 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.761651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.761700 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.761712 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.761731 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.761748 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.866116 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.866172 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.866184 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.866204 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.866219 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.969154 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.969219 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.969237 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.969259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:06 crc kubenswrapper[4771]: I0123 13:34:06.969276 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:06Z","lastTransitionTime":"2026-01-23T13:34:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.072234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.072288 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.072305 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.072332 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.072351 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.175154 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.175221 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.175241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.175278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.175315 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.227857 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.227906 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.227856 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:07 crc kubenswrapper[4771]: E0123 13:34:07.228080 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:07 crc kubenswrapper[4771]: E0123 13:34:07.228253 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:07 crc kubenswrapper[4771]: E0123 13:34:07.228478 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.234855 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:43:11.037946096 +0000 UTC Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.279152 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.279201 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.279212 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.279232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.279245 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.382435 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.382518 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.382542 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.382569 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.382594 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.485471 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.485523 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.485535 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.485555 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.485569 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.588153 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.588188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.588198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.588211 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.588220 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.691224 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.691267 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.691290 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.691312 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.691326 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.794278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.794353 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.794377 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.794446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.794469 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.898115 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.898158 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.898167 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.898182 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:07 crc kubenswrapper[4771]: I0123 13:34:07.898195 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:07Z","lastTransitionTime":"2026-01-23T13:34:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.001314 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.001909 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.001932 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.001959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.001979 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.105849 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.105936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.105998 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.106048 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.106132 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.210188 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.210283 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.210337 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.210364 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.210381 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.227111 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:08 crc kubenswrapper[4771]: E0123 13:34:08.227268 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.235145 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:42:14.582230636 +0000 UTC Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.313366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.313432 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.313444 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.313463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.313475 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.416003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.416063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.416075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.416093 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.416106 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.518909 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.518974 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.518997 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.519026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.519053 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.621782 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.621838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.621854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.621874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.621891 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.724322 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.724445 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.724456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.724476 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.724487 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.827538 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.827599 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.827609 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.827630 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.827649 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.930723 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.930816 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.930836 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.930866 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:08 crc kubenswrapper[4771]: I0123 13:34:08.930885 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:08Z","lastTransitionTime":"2026-01-23T13:34:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.034877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.034935 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.034952 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.034979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.034997 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.138854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.138917 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.138934 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.138959 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.138978 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.228026 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:09 crc kubenswrapper[4771]: E0123 13:34:09.228181 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.228264 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.228359 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:09 crc kubenswrapper[4771]: E0123 13:34:09.228976 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:09 crc kubenswrapper[4771]: E0123 13:34:09.229117 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.235343 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:04:11.893605209 +0000 UTC Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.241767 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.241835 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.241862 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.241895 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.241933 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.246913 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3576c0a2-8766-440d-9c23-c9f170201b31\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a49d0d6a5f46331b0234812a7f6cd620b852af65196a2949d22069bc0f83ba13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7612d247cfac4dd07c6c0a1f0ed053e83d2e170d3ac66bbb793a9804441faf\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://003674632df421f2486bf210eb927577eb29d43d09da079bf2f9338c2a19bb27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fb5f401eba02623a73c4f81f2ce4aa29d525d9f7c32781afb465f31e36849cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.249031 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.267559 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d013865a977096a711b0234fcba966947c64f61b081e61ac36a05fdd9bee8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.283171 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-gnfrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b97569b-da05-4b9b-826f-f4ffc7efb2fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f2c267a7397de1c2918e21e1f7d81cbd0fbb655de2b86aa8376f2cfed191531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-54vbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-gnfrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.299341 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.317614 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.335174 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.344492 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.344552 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.344569 4771 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.344589 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.344606 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.352727 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.366005 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.381388 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5dzz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803fce37-afd3-4ce0-9135-ccb3831e206c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:44Z\\\",\\\"message\\\":\\\"2026-01-23T13:32:58+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860\\\\n2026-01-23T13:32:58+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c0594215-76fb-4991-8ad4-6e5153318860 to /host/opt/cni/bin/\\\\n2026-01-23T13:32:59Z [verbose] multus-daemon started\\\\n2026-01-23T13:32:59Z [verbose] Readiness Indicator file check\\\\n2026-01-23T13:33:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvdz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5dzz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.399574 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700ad9d9-4931-48f1-ba4c-546352bdb749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56ad5d465d0195cf4acd652d0276cf2deab11a26cb90434bfeffdd742a7e2304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2053d07d807e2ef9b6f06f1d51227d59164b04ffb913f4fdf16b5c6fdc415e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a30bf135d10692a4ea0845d1d6df6e43f9a755deac1a52cb88044cd6ef8cb21\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b014c9d31d3c38ba9e0e06ff00c5327f3316905f647c04e018b04ec04685c68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e0d1923bac3d69008be461ba76b16a09d6c91a94bc28bd1a1e112b71d909f31\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2af6cc344259cfdebe2860a15e7c9662b392c180b2c31da1a969e8e81aa9f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd2dade894873670bedf9bfa8b34176240de0b41696418aa38d4cc957accbece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:33:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pklvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-x6dcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.415639 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8e44e1-6639-45d3-927f-347dc88e96c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db71780144c5e61bfb393a9df100ace0e5069bf661f76b1bfde84c68f5d3a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pxjwn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z299d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.444177 4771 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba84e18-6300-433f-98d7-f1a2ddd0073c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T13:33:55Z\\\",\\\"message\\\":\\\"led to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:33:55Z is after 2025-08-24T17:21:41Z]\\\\nI0123 13:33:55.383729 6795 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-qbvcq\\\\nI0123 13:33:55.383619 6795 services_controller.go:434] Service openshift-route-controller-manager/route-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{route-controller-manager openshift-route-controller-manager 754a1504-193a-42d9-b250-5d40bcccc281 4720 0 2025-02-23 05:22:48 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:route-controller-manager] map[operator.openshift.io/spec-hash:a480352ea60c2dcd2b3870bf0c3650528ef9b51aaa3fe6baa1e3711da18fffa3 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:33:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g4bww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qbvcq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.447588 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.447647 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.447666 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.447732 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.447750 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.464099 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670d2340-5b79-4ff2-a3e2-8dd3a827de98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0795d3d201499277fbf4fecf01909a97e569c3abd0831645f1254779ba1bf08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d022c0065ae12096ae954ca895c1060b5a69e7155a3704867c867ea30665f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://96a56c69d27ddd655a90a3594901472d01b53179a5a9f204cf374df43918139f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.478618 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-965tw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b412c0bf-8f05-4214-a0a3-90ae1113bb54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65fb203dccd13e182237f62c1e964162d9e389f125f72002cbde23f34daced2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgc5b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-965tw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.498190 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c19e4284c8182f58e9c2add3b370336ea02544247baeadde8de557dd70215bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.515330 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.531494 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81ddaf2d-5008-4aeb-86aa-af7df8d3fb01\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://982721b6f8306647f967322328151de1682f3bb4d1e5ab256ad6e3ca2735884c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37d9944d43b0d333145d8b736257eaf317e86ff41593bb8ac5c6ddc44240db17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-clus
ter-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:33:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92mvj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lsjsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.551095 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.551187 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.551216 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.551251 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.551276 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.553019 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b016d90-c27f-4401-99f4-859f3627e491\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdhjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:33:11Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4vhqn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:09Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.654747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.654807 4771 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.654817 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.654837 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.654849 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.758120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.758226 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.758240 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.758258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.758293 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.860983 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.861024 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.861038 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.861058 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.861073 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.964295 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.964360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.964369 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.964390 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:09 crc kubenswrapper[4771]: I0123 13:34:09.964403 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:09Z","lastTransitionTime":"2026-01-23T13:34:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.066977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.067027 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.067040 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.067059 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.067070 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.169799 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.169856 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.169867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.169884 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.169896 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.227615 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:10 crc kubenswrapper[4771]: E0123 13:34:10.227773 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.235884 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:08:37.205765491 +0000 UTC Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.272856 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.272896 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.272904 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.272919 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.272929 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.376350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.376401 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.376445 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.376466 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.376483 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.479802 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.479933 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.479968 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.480002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.480025 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.583094 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.583175 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.583198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.583229 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.583251 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.686898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.687002 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.687023 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.687043 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.687060 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.789958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.790029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.790047 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.790072 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.790088 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.892995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.893045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.893056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.893073 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.893086 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.995993 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.996044 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.996056 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.996080 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:10 crc kubenswrapper[4771]: I0123 13:34:10.996093 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:10Z","lastTransitionTime":"2026-01-23T13:34:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.099309 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.099367 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.099383 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.099433 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.099455 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.202278 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.202334 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.202345 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.202360 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.202368 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.228052 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.228374 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.228473 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:11 crc kubenswrapper[4771]: E0123 13:34:11.228648 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:11 crc kubenswrapper[4771]: E0123 13:34:11.228779 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.228887 4771 scope.go:117] "RemoveContainer" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91" Jan 23 13:34:11 crc kubenswrapper[4771]: E0123 13:34:11.229081 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" Jan 23 13:34:11 crc kubenswrapper[4771]: E0123 13:34:11.229069 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.236110 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:24:58.142650038 +0000 UTC Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.305254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.305330 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.305341 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.305366 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.305380 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.407894 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.407960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.407974 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.407999 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.408015 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.510824 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.510878 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.510893 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.510916 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.510933 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.614566 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.614666 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.614699 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.614729 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.614755 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.718258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.718313 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.718327 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.718350 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.718365 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.820755 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.820832 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.820855 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.820886 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.820908 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.924808 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.924878 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.924899 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.924927 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:11 crc kubenswrapper[4771]: I0123 13:34:11.924948 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:11Z","lastTransitionTime":"2026-01-23T13:34:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.028055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.028095 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.028103 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.028118 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.028128 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.130388 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.130451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.130463 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.130480 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.130493 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.227264 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.227324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.227342 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.227363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.227273 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.227380 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: E0123 13:34:12.227790 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.236346 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 09:32:47.981735263 +0000 UTC Jan 23 13:34:12 crc kubenswrapper[4771]: E0123 13:34:12.248284 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.253055 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.253115 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
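[Annotation] The status patch the kubelet sends is well-formed; it is rejected because the API server must call the node.network-node-identity.openshift.io admission webhook first, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-23. A minimal Go sketch of the same verdict: handshake without chain verification, then compare the leaf certificate's validity window against the current time. The address is the one from the Post URL above and is only reachable from the node itself.

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint taken from the Post URL in the error above.
	addr := "127.0.0.1:9743"

	// Skip chain verification so the handshake succeeds even with an
	// expired certificate; we only want to inspect the validity window.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s expiredNow=%v\n",
		leaf.NotBefore, leaf.NotAfter, now.After(leaf.NotAfter))
}

Every node-status retry below fails with the same x509 error, so the expired webhook certificate (or the clock skew behind it) is what needs fixing; the CNI symptoms above are likely downstream of the same problem. The log continues: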
event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.253127 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.253149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.253165 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: E0123 13:34:12.268189 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [images and nodeInfo payload omitted: byte-identical to the previous patch attempt] }}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.273451 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.273512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.273529 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.273554 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.273568 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: E0123 13:34:12.291945 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.296198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.296259 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.296271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.296291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.296306 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: E0123 13:34:12.312815 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.316388 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
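Interleaved with the webhook failures, the kubelet keeps publishing the same Ready=False condition: the runtime network counts as unready until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which the cluster network provider (OVN-Kubernetes on CRC) writes once it is running. A sketch of that check, assuming the usual ocicni behavior of accepting .conf, .conflist and .json files in that directory, run on the node:

    # Sketch: look for CNI network configs the way the runtime's CNI manager does.
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the condition

    configs = sorted(p for p in CNI_CONF_DIR.glob("*")
                     if p.suffix in {".conf", ".conflist", ".json"})
    if configs:
        for p in configs:
            print("CNI config present:", p)
    else:
        print("no CNI configuration files yet; NetworkReady stays false and pod "
              "sandboxes (see the 'No sandbox for pod' lines below) cannot be created")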
event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.316461 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.316478 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.316490 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: E0123 13:34:12.328480 4771 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T13:34:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"1e760c04-36aa-4fe4-b672-fbc6c675c4ad\\\",\\\"systemUUID\\\":\\\"416566bb-ab9b-4758-90c6-c01061b893a8\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:12Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:12 crc kubenswrapper[4771]: E0123 13:34:12.328629 4771 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.330172 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.330208 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.330218 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.330234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.330246 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.434079 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.434136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.434144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.434161 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.434170 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.537512 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.537601 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.537618 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.537678 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.537705 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.641556 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.641686 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.641713 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.641740 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.641761 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.744185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.744239 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.744252 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.744271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.744284 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.846531 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.846602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.846614 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.846632 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.846645 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.949544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.949587 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.949599 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.949615 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:12 crc kubenswrapper[4771]: I0123 13:34:12.949629 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:12Z","lastTransitionTime":"2026-01-23T13:34:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.051804 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.051878 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.051895 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.051918 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.051937 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.154244 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.154336 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.154363 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.154392 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.154465 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.228110 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.228110 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.228110 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:13 crc kubenswrapper[4771]: E0123 13:34:13.228250 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:13 crc kubenswrapper[4771]: E0123 13:34:13.228394 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:13 crc kubenswrapper[4771]: E0123 13:34:13.229988 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.237282 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:39:10.68530715 +0000 UTC Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.257385 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.257447 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.257459 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.257474 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.257486 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.360388 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.360456 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.360468 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.360496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.360510 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.463511 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.463591 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.463617 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.463649 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.463674 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.566888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.566985 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.567014 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.567045 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.567067 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.669886 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.669955 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.669973 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.670003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.670023 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.773003 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.773075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.773085 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.773100 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.773111 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.874971 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.875029 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.875042 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.875061 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.875073 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.978739 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.978831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.978861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.978892 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:13 crc kubenswrapper[4771]: I0123 13:34:13.978913 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:13Z","lastTransitionTime":"2026-01-23T13:34:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.083149 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.083246 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.083273 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.083310 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.083335 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.185730 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.185812 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.185824 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.185855 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.185869 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.227101 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:34:14 crc kubenswrapper[4771]: E0123 13:34:14.227334 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.238347 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:49:32.364780961 +0000 UTC
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.289189 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.289232 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.289241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.289254 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.289266 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.391868 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.391966 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.391979 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.391995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.392008 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.494744 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.494794 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.494806 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.494823 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.494835 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.597747 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.597793 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.597803 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.597821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.597833 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.700867 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.700901 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.700910 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.700926 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.700937 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.802977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.803015 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.803026 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.803042 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.803055 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.906726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.906784 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.906800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.906826 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.906840 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:14Z","lastTransitionTime":"2026-01-23T13:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:14 crc kubenswrapper[4771]: I0123 13:34:14.965830 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn"
Jan 23 13:34:14 crc kubenswrapper[4771]: E0123 13:34:14.966041 4771 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 13:34:14 crc kubenswrapper[4771]: E0123 13:34:14.966163 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs podName:6b016d90-c27f-4401-99f4-859f3627e491 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:18.966131024 +0000 UTC m=+159.988668689 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs") pod "network-metrics-daemon-4vhqn" (UID: "6b016d90-c27f-4401-99f4-859f3627e491") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.010253 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.010324 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.010335 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.010355 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.010369 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.113860 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.113926 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.113936 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.113976 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.113989 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.217035 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.217077 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.217087 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.217102 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.217113 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.227797 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.227858 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.227858 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:34:15 crc kubenswrapper[4771]: E0123 13:34:15.228007 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491"
Jan 23 13:34:15 crc kubenswrapper[4771]: E0123 13:34:15.228162 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 13:34:15 crc kubenswrapper[4771]: E0123 13:34:15.228371 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.238576 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:30:40.056029801 +0000 UTC
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.319901 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.319952 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.319964 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.319981 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.319996 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.423583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.423641 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.423656 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.423679 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.423695 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.526751 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.526838 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.526861 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.526883 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.526923 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.629613 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.629663 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.629672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.629686 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.629697 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.732397 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.732462 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.732472 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.732488 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.732499 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.834886 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.834950 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.834963 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.834985 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.835005 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.939440 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.939496 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.939509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.939525 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:15 crc kubenswrapper[4771]: I0123 13:34:15.939540 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:15Z","lastTransitionTime":"2026-01-23T13:34:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.042560 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.042606 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.042615 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.042630 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.042639 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.145962 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.146034 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.146051 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.146080 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.146097 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.227577 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:34:16 crc kubenswrapper[4771]: E0123 13:34:16.227742 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.238939 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:57:35.163509922 +0000 UTC
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.249446 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.249514 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.249538 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.249570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.249596 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.352030 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.352075 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.352114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.352138 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.352153 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.455857 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.455928 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.455951 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.455980 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.456005 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.559316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.559375 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.559392 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.559447 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.559483 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.662289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.662362 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.662386 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.662452 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.662476 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.766072 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.766141 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.766151 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.766167 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.766177 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.869509 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.869583 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.869602 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.869626 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.869643 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.972472 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.972540 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.972551 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.972570 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:16 crc kubenswrapper[4771]: I0123 13:34:16.972580 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:16Z","lastTransitionTime":"2026-01-23T13:34:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.075813 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.075877 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.075896 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.075922 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.075940 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.179167 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.179272 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.179291 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.179316 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.179333 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.227288 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.227323 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.227386 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:34:17 crc kubenswrapper[4771]: E0123 13:34:17.227544 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 13:34:17 crc kubenswrapper[4771]: E0123 13:34:17.227721 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491"
Jan 23 13:34:17 crc kubenswrapper[4771]: E0123 13:34:17.228143 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.239785 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:00:16.638823426 +0000 UTC
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.282795 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.282857 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.282875 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.282898 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.282914 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.386916 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.386988 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.387007 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.387033 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.387054 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.490873 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.490928 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.490939 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.490960 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.490972 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.593575 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.593625 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.593640 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.593659 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.593675 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.696289 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.696390 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.696449 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.696473 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.696494 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.799124 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.799186 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.799197 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.799213 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.799223 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.901272 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.901328 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.901351 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.901371 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:17 crc kubenswrapper[4771]: I0123 13:34:17.901388 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:17Z","lastTransitionTime":"2026-01-23T13:34:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.004193 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.004224 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.004233 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.004250 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.004261 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.106591 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.106635 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.106646 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.106661 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.106675 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.209737 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.209801 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.209813 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.209831 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.209845 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.228061 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:34:18 crc kubenswrapper[4771]: E0123 13:34:18.228520 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.240931 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:41:18.452997565 +0000 UTC
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.312455 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.312513 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.312524 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.312544 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.312560 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.415207 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.415249 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.415258 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.415271 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.415281 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.517639 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.517694 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.517711 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.517734 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.517756 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.621144 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.621214 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.621234 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.621261 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.621280 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.725946 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.726019 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.726063 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.726114 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.726137 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.829198 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.829651 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.829858 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.830078 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.830272 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.934652 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.934726 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.934746 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.934772 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:18 crc kubenswrapper[4771]: I0123 13:34:18.934792 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:18Z","lastTransitionTime":"2026-01-23T13:34:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.037672 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.037722 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.037735 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.037753 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.037762 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.141483 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.141547 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.141561 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.141582 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.141608 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.228158 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.228236 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.228172 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:19 crc kubenswrapper[4771]: E0123 13:34:19.228326 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:19 crc kubenswrapper[4771]: E0123 13:34:19.228445 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:19 crc kubenswrapper[4771]: E0123 13:34:19.228580 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.241117 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 19:04:39.335637466 +0000 UTC Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.243894 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6183e35f-9a7f-4efd-bae3-3c7b565cc310\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f87b80b8c4ae1f820dd75e41c94269e143a118380c9f1e3bee530b9f42ac03c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18da45bbfdd56cad94403f4770e46e16863b07a85ff180ae167f50eed5b5096d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.244067 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.244120 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.244132 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.244150 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.244163 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.262998 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8e30445-3412-4c78-8100-621a5938da93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:33:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T13:32:56Z\\\",\\\"message\\\":\\\"file observer\\\\nW0123 13:32:56.330691 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 13:32:56.330853 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 13:32:56.332678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-715217831/tls.crt::/tmp/serving-cert-715217831/tls.key\\\\\\\"\\\\nI0123 13:32:56.497863 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 13:32:56.501465 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 13:32:56.501489 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 13:32:56.501509 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 13:32:56.501515 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 13:32:56.506982 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 13:32:56.507005 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 13:32:56.507010 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 13:32:56.507014 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 13:32:56.507017 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 13:32:56.507021 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 13:32:56.507024 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 13:32:56.507243 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 13:32:56.509702 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:41Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T13:32:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T13:32:39Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.280495 4771 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da46b07e9cd231010b9a3bd12521075f31aa44d914a014e992ac0dab68bfa7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3c3177aeaa603aee220082bbf8d8affbd6aa30c9297b1fdc57a567f569da9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T13:32:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.299211 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.316823 4771 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T13:32:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T13:34:19Z is after 2025-08-24T17:21:41Z" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.344189 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x6dcn" podStartSLOduration=83.344154914 podStartE2EDuration="1m23.344154914s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.343969198 +0000 UTC m=+100.366506853" watchObservedRunningTime="2026-01-23 13:34:19.344154914 +0000 UTC m=+100.366692539" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.345958 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.345995 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.346007 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.346023 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.346035 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.358215 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podStartSLOduration=83.358192478 podStartE2EDuration="1m23.358192478s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.357902269 +0000 UTC m=+100.380439944" watchObservedRunningTime="2026-01-23 13:34:19.358192478 +0000 UTC m=+100.380730103" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.404790 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.404758646 podStartE2EDuration="1m23.404758646s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.40460061 +0000 UTC m=+100.427138255" watchObservedRunningTime="2026-01-23 13:34:19.404758646 +0000 UTC m=+100.427296281" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.416633 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-965tw" podStartSLOduration=83.416604129 podStartE2EDuration="1m23.416604129s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.416598039 +0000 UTC m=+100.439135684" watchObservedRunningTime="2026-01-23 13:34:19.416604129 +0000 UTC m=+100.439141754" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.447830 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.447874 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.447888 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.447905 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.447929 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.459973 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5dzz5" podStartSLOduration=83.459938731 podStartE2EDuration="1m23.459938731s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.459404363 +0000 UTC m=+100.481942008" watchObservedRunningTime="2026-01-23 13:34:19.459938731 +0000 UTC m=+100.482476356" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.473319 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lsjsp" podStartSLOduration=82.473298093 podStartE2EDuration="1m22.473298093s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.472624932 +0000 UTC m=+100.495162557" watchObservedRunningTime="2026-01-23 13:34:19.473298093 +0000 UTC m=+100.495835738" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.515025 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=10.515009492 podStartE2EDuration="10.515009492s" podCreationTimestamp="2026-01-23 13:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.513766202 +0000 UTC m=+100.536303857" watchObservedRunningTime="2026-01-23 13:34:19.515009492 +0000 UTC m=+100.537547117" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.528642 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.528610843 podStartE2EDuration="51.528610843s" podCreationTimestamp="2026-01-23 13:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.527470086 +0000 UTC m=+100.550007721" watchObservedRunningTime="2026-01-23 13:34:19.528610843 +0000 UTC m=+100.551148488" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.550800 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.550836 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.550845 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.550859 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.550850 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-gnfrx" podStartSLOduration=83.550836932 podStartE2EDuration="1m23.550836932s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:19.550393708 +0000 UTC m=+100.572931353" watchObservedRunningTime="2026-01-23 
13:34:19.550836932 +0000 UTC m=+100.573374577" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.550868 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.652760 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.652822 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.652834 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.652851 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.652874 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.756046 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.756096 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.756107 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.756125 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.756142 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.859676 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.859773 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.859791 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.859821 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.859843 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.962339 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.962393 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.962447 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.962475 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:19 crc kubenswrapper[4771]: I0123 13:34:19.962499 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:19Z","lastTransitionTime":"2026-01-23T13:34:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.064854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.064902 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.064914 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.064929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.064941 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:20Z","lastTransitionTime":"2026-01-23T13:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.167977 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.168028 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.168042 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.168061 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.168076 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:20Z","lastTransitionTime":"2026-01-23T13:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.227129 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:34:20 crc kubenswrapper[4771]: E0123 13:34:20.227293 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:34:20 crc kubenswrapper[4771]: I0123 13:34:20.241523 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:10:16.166254731 +0000 UTC
[... the five-entry status block above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") is re-logged roughly every 100 ms while the node stays NotReady; identical repetitions from 13:34:20.270 through 13:34:22.669 are elided below, keeping only the entries interleaved between them ...]
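The condition={...} payload in the setters.go:603 entries is the node's Ready condition serialized as JSON; its reason/message pair is what surfaces on the Node object. A minimal Go sketch that unmarshals the exact payload from the log (the struct is a hand-written subset of Kubernetes' NodeCondition, not kubelet code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Ready-condition fields exactly as they appear in the "Node became
// not ready" entries above (a subset of Kubernetes' v1.NodeCondition).
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Verbatim condition payload from the 13:34:20.168076 log entry.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:20Z","lastTransitionTime":"2026-01-23T13:34:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```

Running it prints Ready=False (KubeletNotReady) plus the CNI message, which is the single fact the entire excerpt keeps restating.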
Jan 23 13:34:21 crc kubenswrapper[4771]: I0123 13:34:21.227260 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn"
Jan 23 13:34:21 crc kubenswrapper[4771]: I0123 13:34:21.227381 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 13:34:21 crc kubenswrapper[4771]: I0123 13:34:21.227446 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 13:34:21 crc kubenswrapper[4771]: E0123 13:34:21.227434 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491"
Jan 23 13:34:21 crc kubenswrapper[4771]: E0123 13:34:21.227562 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 13:34:21 crc kubenswrapper[4771]: E0123 13:34:21.227677 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 13:34:21 crc kubenswrapper[4771]: I0123 13:34:21.242108 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:26:22.822349304 +0000 UTC
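Every "No sandbox" / "Error syncing pod" pair here traces back to the same root cause: the runtime reports NetworkReady=false because nothing has written a CNI network config yet. A hedged sketch of that probe, the same directory test applied to /etc/kubernetes/cni/net.d (illustrative only, not the actual CRI-O/libcni source):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// CNI config directory from the log message. Until OVN-Kubernetes
	// drops a network config here, the runtime keeps reporting
	// NetworkReady=false and the kubelet keeps the node NotReady.
	confDir := "/etc/kubernetes/cni/net.d"

	// A network config is typically a *.conf, *.conflist, or *.json file.
	var files []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			panic(err)
		}
		files = append(files, m...)
	}
	if len(files) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI config present:", files)
}
```

On this node the directory stays empty because ovnkube-controller itself is crash-looping (see the CrashLoopBackOff entries below), so the check fails on every sync.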
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.227602 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 13:34:22 crc kubenswrapper[4771]: E0123 13:34:22.228223 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.228275 4771 scope.go:117] "RemoveContainer" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91"
Jan 23 13:34:22 crc kubenswrapper[4771]: E0123 13:34:22.228826 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.242888 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:52:41.841474212 +0000 UTC
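The certificate_manager.go:356 entries log a different rotation deadline on every pass (2025-12-17, 2025-12-16, 2025-12-09 so far, all already in the past on Jan 23) because client-go's certificate manager re-draws a jittered deadline within the certificate's lifetime each time it evaluates; once a drawn deadline lies behind the clock, rotation fires, which is exactly what happens at 13:34:23 below. A rough sketch (the 70-90% window is an assumption about the upstream constants, and notBefore is a hypothetical issue date chosen to be consistent with the logged deadlines for a one-year certificate):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline approximates client-go's certificate manager:
// rotate at a random point in roughly the 70-90% span of the cert's
// lifetime, re-drawn on every evaluation (hence a new value per pass).
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // hypothetical issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}
```

With these dates every drawn deadline lands between early November and mid-January, matching the spread of deadlines in the log and explaining why rotation kicks in almost immediately.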
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:22Z","lastTransitionTime":"2026-01-23T13:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.435136 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.435185 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.435196 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.435215 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.435228 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:22Z","lastTransitionTime":"2026-01-23T13:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.538178 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.538241 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.538255 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.538273 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.538287 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:22Z","lastTransitionTime":"2026-01-23T13:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.641354 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.641398 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.641423 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.641439 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.641451 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:22Z","lastTransitionTime":"2026-01-23T13:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.669854 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.669929 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.669942 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.669957 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.669969 4771 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T13:34:22Z","lastTransitionTime":"2026-01-23T13:34:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.709120 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"] Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.709553 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.711903 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.712030 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.712043 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.712503 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.747547 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1ccec12-2d97-45cc-906e-e641084df8b9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.747960 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1ccec12-2d97-45cc-906e-e641084df8b9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.748099 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1ccec12-2d97-45cc-906e-e641084df8b9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.748217 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1ccec12-2d97-45cc-906e-e641084df8b9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.748340 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1ccec12-2d97-45cc-906e-e641084df8b9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.770969 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=37.770946937 podStartE2EDuration="37.770946937s" podCreationTimestamp="2026-01-23 13:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
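The reconciler entries around this point show the kubelet's two-phase volume handling for the new CVO pod: operationExecutor.VerifyControllerAttachedVolume above first records each of the five volumes as attached in the actual state of the world, then the operationExecutor.MountVolume entries below run SetUp for each and log success. A toy Go sketch of that diff-and-act loop (the desired/actual naming is mine, not kubelet's; volume names are the CVO pod's from the log):

```go
package main

import "fmt"

// A toy version of the kubelet volume reconciler: compare the desired
// set of volumes for a pod against the actual (mounted) set and run
// SetUp for whatever is missing.
func main() {
	desired := []string{"kube-api-access", "service-ca", "etc-ssl-certs", "serving-cert", "etc-cvo-updatepayloads"}
	actual := map[string]bool{} // nothing mounted yet for a brand-new pod

	// Phase one: verify/record attachment for every desired volume.
	for _, v := range desired {
		fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q\n", v)
	}
	// Phase two: mount whatever is not yet in the actual state.
	for _, v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q\n", v)
			actual[v] = true // pretend SetUp succeeded
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
		}
	}
}
```

Note that this pod mounts and starts despite the NotReady node: it runs on the host network, so it does not wait for CNI.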
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.813680 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.81365652 podStartE2EDuration="1m25.81365652s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:22.812254925 +0000 UTC m=+103.834792550" watchObservedRunningTime="2026-01-23 13:34:22.81365652 +0000 UTC m=+103.836194155"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.849166 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1ccec12-2d97-45cc-906e-e641084df8b9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.849480 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1ccec12-2d97-45cc-906e-e641084df8b9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.849574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1ccec12-2d97-45cc-906e-e641084df8b9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.849664 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1ccec12-2d97-45cc-906e-e641084df8b9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.849750 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1ccec12-2d97-45cc-906e-e641084df8b9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.849754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d1ccec12-2d97-45cc-906e-e641084df8b9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.849614 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d1ccec12-2d97-45cc-906e-e641084df8b9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.850108 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1ccec12-2d97-45cc-906e-e641084df8b9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.856159 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1ccec12-2d97-45cc-906e-e641084df8b9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:22 crc kubenswrapper[4771]: I0123 13:34:22.867964 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1ccec12-2d97-45cc-906e-e641084df8b9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zdlfw\" (UID: \"d1ccec12-2d97-45cc-906e-e641084df8b9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
Jan 23 13:34:23 crc kubenswrapper[4771]: I0123 13:34:23.027986 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw"
[... from 13:34:23 through 13:34:39 the same "No sandbox for pod can be found" / "Error syncing pod, skipping ... network is not ready" pair recurs every one to two seconds for the four pods network-check-target-xd92c, network-metrics-daemon-4vhqn, network-check-source-55646444c4-trplf, and networking-console-plugin-85b44fc459-gdk6g; those repetitions are elided below, keeping only new events ...]
Jan 23 13:34:23 crc kubenswrapper[4771]: I0123 13:34:23.243819 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 02:22:39.412419527 +0000 UTC
Jan 23 13:34:23 crc kubenswrapper[4771]: I0123 13:34:23.243942 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 23 13:34:23 crc kubenswrapper[4771]: I0123 13:34:23.254248 4771 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 23 13:34:23 crc kubenswrapper[4771]: I0123 13:34:23.858824 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" event={"ID":"d1ccec12-2d97-45cc-906e-e641084df8b9","Type":"ContainerStarted","Data":"3c02757537fb92a2f9acc6036ac5039b3684fd63063b22e5837d8fe805a316c9"}
Jan 23 13:34:23 crc kubenswrapper[4771]: I0123 13:34:23.859136 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" event={"ID":"d1ccec12-2d97-45cc-906e-e641084df8b9","Type":"ContainerStarted","Data":"f5f7d18c5eccafe4fd23e05efd22602ecd1b1b4ac4f46e7e1d16ce7f934fcb3d"}
Jan 23 13:34:23 crc kubenswrapper[4771]: I0123 13:34:23.875572 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zdlfw" podStartSLOduration=87.875556978 podStartE2EDuration="1m27.875556978s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:23.874900686 +0000 UTC m=+104.897438341" watchObservedRunningTime="2026-01-23 13:34:23.875556978 +0000 UTC m=+104.898094593"
Jan 23 13:34:30 crc kubenswrapper[4771]: I0123 13:34:30.885582 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/1.log"
Jan 23 13:34:30 crc kubenswrapper[4771]: I0123 13:34:30.886626 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/0.log"
Jan 23 13:34:30 crc kubenswrapper[4771]: I0123 13:34:30.886706 4771 generic.go:334] "Generic (PLEG): container finished" podID="803fce37-afd3-4ce0-9135-ccb3831e206c" containerID="a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615" exitCode=1
Jan 23 13:34:30 crc kubenswrapper[4771]: I0123 13:34:30.886760 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerDied","Data":"a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615"}
Jan 23 13:34:30 crc kubenswrapper[4771]: I0123 13:34:30.886806 4771 scope.go:117] "RemoveContainer" containerID="e22b1e85ff538e90b97f4634df784d33818e4fed49986f7c489b5f207bcf94a4"
Jan 23 13:34:30 crc kubenswrapper[4771]: I0123 13:34:30.887490 4771 scope.go:117] "RemoveContainer" containerID="a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615"
Jan 23 13:34:30 crc kubenswrapper[4771]: E0123 13:34:30.887953 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5dzz5_openshift-multus(803fce37-afd3-4ce0-9135-ccb3831e206c)\"" pod="openshift-multus/multus-5dzz5" podUID="803fce37-afd3-4ce0-9135-ccb3831e206c"
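kube-multus enters CrashLoopBackOff at back-off 10s while ovnkube-controller is already at back-off 40s: the kubelet doubles a container's restart back-off on every crash and caps it, 10s initial and a 5m ceiling being the upstream defaults (treat the exact constants as an assumption). A sketch of the progression:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet-style restart back-off: double per crash, capped.
	// 10s initial and 5m cap are the assumed upstream defaults.
	backoff, maxBackoff := 10*time.Second, 5*time.Minute
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d: back-off %v restarting failed container\n", crash, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```

So the 10s/40s values in the errors simply mean kube-multus has crashed once so far and ovnkube-controller three times, and the kubelet will retry each when its window expires.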
Jan 23 13:34:31 crc kubenswrapper[4771]: I0123 13:34:31.891215 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/1.log"
Jan 23 13:34:33 crc kubenswrapper[4771]: I0123 13:34:33.229111 4771 scope.go:117] "RemoveContainer" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91"
Jan 23 13:34:33 crc kubenswrapper[4771]: E0123 13:34:33.229461 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qbvcq_openshift-ovn-kubernetes(4ba84e18-6300-433f-98d7-f1a2ddd0073c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:39 crc kubenswrapper[4771]: E0123 13:34:39.235627 4771 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 13:34:39 crc kubenswrapper[4771]: E0123 13:34:39.331978 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 13:34:40 crc kubenswrapper[4771]: I0123 13:34:40.227994 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:40 crc kubenswrapper[4771]: E0123 13:34:40.228216 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:41 crc kubenswrapper[4771]: I0123 13:34:41.228145 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:41 crc kubenswrapper[4771]: I0123 13:34:41.228276 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:41 crc kubenswrapper[4771]: I0123 13:34:41.228150 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:41 crc kubenswrapper[4771]: E0123 13:34:41.228349 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:41 crc kubenswrapper[4771]: E0123 13:34:41.228536 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:41 crc kubenswrapper[4771]: E0123 13:34:41.228626 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:42 crc kubenswrapper[4771]: I0123 13:34:42.227377 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:42 crc kubenswrapper[4771]: E0123 13:34:42.227549 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:43 crc kubenswrapper[4771]: I0123 13:34:43.227577 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:43 crc kubenswrapper[4771]: E0123 13:34:43.227821 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:43 crc kubenswrapper[4771]: I0123 13:34:43.227615 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:43 crc kubenswrapper[4771]: I0123 13:34:43.227578 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:43 crc kubenswrapper[4771]: E0123 13:34:43.228000 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:43 crc kubenswrapper[4771]: E0123 13:34:43.228179 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:44 crc kubenswrapper[4771]: I0123 13:34:44.227686 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:44 crc kubenswrapper[4771]: E0123 13:34:44.227828 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:44 crc kubenswrapper[4771]: E0123 13:34:44.333767 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 23 13:34:45 crc kubenswrapper[4771]: I0123 13:34:45.227772 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:45 crc kubenswrapper[4771]: I0123 13:34:45.227852 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:45 crc kubenswrapper[4771]: E0123 13:34:45.227943 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:45 crc kubenswrapper[4771]: I0123 13:34:45.228012 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:45 crc kubenswrapper[4771]: E0123 13:34:45.228135 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:45 crc kubenswrapper[4771]: E0123 13:34:45.228339 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:45 crc kubenswrapper[4771]: I0123 13:34:45.228683 4771 scope.go:117] "RemoveContainer" containerID="a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615" Jan 23 13:34:45 crc kubenswrapper[4771]: I0123 13:34:45.946026 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/1.log" Jan 23 13:34:45 crc kubenswrapper[4771]: I0123 13:34:45.946599 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerStarted","Data":"28ac912c2e3ef2dca670bbcb9e317bc6920fefb80666b05c8f726b30575a2dc5"} Jan 23 13:34:46 crc kubenswrapper[4771]: I0123 13:34:46.227738 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:46 crc kubenswrapper[4771]: E0123 13:34:46.227922 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.228072 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.228102 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.228102 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:47 crc kubenswrapper[4771]: E0123 13:34:47.228388 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:47 crc kubenswrapper[4771]: E0123 13:34:47.228642 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:47 crc kubenswrapper[4771]: E0123 13:34:47.228760 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.229462 4771 scope.go:117] "RemoveContainer" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91" Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.953619 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/3.log" Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.956301 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerStarted","Data":"c2154643b4cc41a9aa58b5a1db17f5fca6204c67bf8fb95a4bd7c8a2dc0276c0"} Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.957001 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:34:47 crc kubenswrapper[4771]: I0123 13:34:47.986510 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podStartSLOduration=111.98648994 podStartE2EDuration="1m51.98648994s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:34:47.986058857 +0000 UTC m=+129.008596502" watchObservedRunningTime="2026-01-23 13:34:47.98648994 +0000 UTC m=+129.009027605" Jan 23 13:34:48 crc kubenswrapper[4771]: I0123 13:34:47.998626 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4vhqn"] Jan 23 13:34:48 crc kubenswrapper[4771]: I0123 13:34:47.998774 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:48 crc kubenswrapper[4771]: E0123 13:34:47.998905 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:48 crc kubenswrapper[4771]: I0123 13:34:48.228140 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:48 crc kubenswrapper[4771]: E0123 13:34:48.228289 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:49 crc kubenswrapper[4771]: I0123 13:34:49.228114 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:49 crc kubenswrapper[4771]: E0123 13:34:49.230350 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:49 crc kubenswrapper[4771]: I0123 13:34:49.230661 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:49 crc kubenswrapper[4771]: E0123 13:34:49.230846 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:49 crc kubenswrapper[4771]: E0123 13:34:49.334719 4771 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 13:34:50 crc kubenswrapper[4771]: I0123 13:34:50.227820 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:50 crc kubenswrapper[4771]: I0123 13:34:50.227906 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:50 crc kubenswrapper[4771]: E0123 13:34:50.227975 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:50 crc kubenswrapper[4771]: E0123 13:34:50.228078 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:51 crc kubenswrapper[4771]: I0123 13:34:51.227646 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:51 crc kubenswrapper[4771]: I0123 13:34:51.227663 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:51 crc kubenswrapper[4771]: E0123 13:34:51.227838 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:51 crc kubenswrapper[4771]: E0123 13:34:51.227940 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:52 crc kubenswrapper[4771]: I0123 13:34:52.227804 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:52 crc kubenswrapper[4771]: I0123 13:34:52.227935 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:52 crc kubenswrapper[4771]: E0123 13:34:52.227969 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:52 crc kubenswrapper[4771]: E0123 13:34:52.228138 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:53 crc kubenswrapper[4771]: I0123 13:34:53.227683 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:53 crc kubenswrapper[4771]: I0123 13:34:53.227779 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:53 crc kubenswrapper[4771]: E0123 13:34:53.227967 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 13:34:53 crc kubenswrapper[4771]: E0123 13:34:53.228131 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 13:34:54 crc kubenswrapper[4771]: I0123 13:34:54.227540 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:54 crc kubenswrapper[4771]: I0123 13:34:54.227761 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:54 crc kubenswrapper[4771]: E0123 13:34:54.227949 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4vhqn" podUID="6b016d90-c27f-4401-99f4-859f3627e491" Jan 23 13:34:54 crc kubenswrapper[4771]: E0123 13:34:54.228141 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 13:34:55 crc kubenswrapper[4771]: I0123 13:34:55.228135 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:34:55 crc kubenswrapper[4771]: I0123 13:34:55.228282 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:34:55 crc kubenswrapper[4771]: I0123 13:34:55.232359 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 13:34:55 crc kubenswrapper[4771]: I0123 13:34:55.232484 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 13:34:55 crc kubenswrapper[4771]: I0123 13:34:55.233244 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 13:34:55 crc kubenswrapper[4771]: I0123 13:34:55.233619 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 13:34:55 crc kubenswrapper[4771]: I0123 13:34:55.327321 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:34:56 crc kubenswrapper[4771]: I0123 13:34:56.227288 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:34:56 crc kubenswrapper[4771]: I0123 13:34:56.227468 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:34:56 crc kubenswrapper[4771]: I0123 13:34:56.230441 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 13:34:56 crc kubenswrapper[4771]: I0123 13:34:56.230780 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 13:35:00 crc kubenswrapper[4771]: I0123 13:35:00.311956 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:35:00 crc kubenswrapper[4771]: I0123 13:35:00.312027 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.763344 4771 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.805793 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xgb8j"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.806640 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.806651 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gftf6"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.807499 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.808636 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.809214 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.810275 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.810794 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.810883 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.811271 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.811775 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-serving-cert\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.811842 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-client-ca\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.811879 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6186c03c-22f6-4047-910f-6e2259a75960-auth-proxy-config\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.811932 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6186c03c-22f6-4047-910f-6e2259a75960-config\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.811980 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-etcd-serving-ca\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812026 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-serving-cert\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812051 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-audit\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812078 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812106 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-audit-dir\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6186c03c-22f6-4047-910f-6e2259a75960-machine-approver-tls\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812389 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-config\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812434 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-node-pullsecrets\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812458 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxxqh\" (UniqueName: \"kubernetes.io/projected/6186c03c-22f6-4047-910f-6e2259a75960-kube-api-access-pxxqh\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812479 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812507 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-config\") pod 
\"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812578 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lqvf\" (UniqueName: \"kubernetes.io/projected/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-kube-api-access-9lqvf\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812676 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckgh9\" (UniqueName: \"kubernetes.io/projected/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-kube-api-access-ckgh9\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812703 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-etcd-client\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812726 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-image-import-ca\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-encryption-config\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.812857 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c59wl"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.813533 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.815186 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.815737 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.815897 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.815888 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.816017 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.816527 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.817542 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z5t5f"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.818189 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.819441 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.819904 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.822991 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.824361 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.826962 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s9r77"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.827058 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.827792 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-578qn"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.828338 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.828385 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.828459 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.829043 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.831034 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-prp7p"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.831355 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.831914 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.832063 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.832391 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.834234 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.834906 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.835005 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.835178 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.835284 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.835564 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.836628 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.836782 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.837119 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.837946 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.842983 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.843426 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.843456 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.843433 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.843539 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.843431 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.843444 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.850847 4771 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.850921 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.852158 4771 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.852226 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.852291 4771 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" 
in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.852309 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.852775 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.852958 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.853009 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.852969 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.858519 4771 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.858618 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.860085 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-grzg6"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.861323 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.865757 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.867761 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.869939 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.870171 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.872192 4771 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.872277 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.872547 4771 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.872575 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.872865 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.873154 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.873664 4771 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc 
kubenswrapper[4771]: I0123 13:35:03.880183 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.880181 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.873721 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.887908 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.888133 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.888237 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.888426 4771 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.888466 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.889191 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: W0123 13:35:03.889308 4771 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 23 13:35:03 crc kubenswrapper[4771]: E0123 13:35:03.889334 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.892166 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.892390 4771 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.893880 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-84f77"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.894358 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.894666 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hgksm"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.894783 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.894939 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.894962 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.895822 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.895957 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.895955 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.896013 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.896337 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.896507 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.896109 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.896161 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.896190 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.896205 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897284 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897442 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 13:35:03 crc 
kubenswrapper[4771]: I0123 13:35:03.897357 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897549 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897640 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897718 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897807 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897823 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897830 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.898023 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897902 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.897937 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.898242 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.898490 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.898881 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.899027 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.899172 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.900290 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.900452 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.900664 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.900981 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.901002 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.901444 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d67nm"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.901769 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.901947 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hgksm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.903321 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.904770 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.905923 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.915264 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.920125 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.920148 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.920346 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.920455 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.920951 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5k7z5"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.921293 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.921332 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxd4\" (UniqueName: \"kubernetes.io/projected/28667350-f72e-42d9-92d3-1e45074aa44c-kube-api-access-8fxd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.921353 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kmwh\" (UniqueName: \"kubernetes.io/projected/b270566d-61fd-4698-bacd-22dd3f26ba3e-kube-api-access-9kmwh\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.921372 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.922101 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.923825 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924004 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.921374 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-config\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924174 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-config\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924196 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75tm8\" (UniqueName: \"kubernetes.io/projected/ef981f89-01c0-438a-a1b3-1f0e18d3496e-kube-api-access-75tm8\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924230 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924275 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6186c03c-22f6-4047-910f-6e2259a75960-machine-approver-tls\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924295 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-oauth-config\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gnt\" (UniqueName: \"kubernetes.io/projected/6c1e299b-6a89-4d9c-87ff-e2937d66487d-kube-api-access-d7gnt\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924333 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b837e5a7-79f3-431e-ad7b-bd979aa81b41-serving-cert\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924361 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-service-ca\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924382 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-etcd-client\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.924399 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-service-ca\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.925568 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.925786 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.925939 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.926331 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.926879 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-policies\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.926926 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-config\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.926950 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.926970 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.926985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjwt\" (UniqueName: \"kubernetes.io/projected/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-kube-api-access-spjwt\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927002 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28667350-f72e-42d9-92d3-1e45074aa44c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927016 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-ca\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927038 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927077 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-node-pullsecrets\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927093 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxxqh\" (UniqueName: \"kubernetes.io/projected/6186c03c-22f6-4047-910f-6e2259a75960-kube-api-access-pxxqh\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927110 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927127 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28667350-f72e-42d9-92d3-1e45074aa44c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927145 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-encryption-config\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927164 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b270566d-61fd-4698-bacd-22dd3f26ba3e-serving-cert\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927184 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xpr6\" (UniqueName: \"kubernetes.io/projected/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-kube-api-access-6xpr6\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927201 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927221 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-metrics-tls\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-config\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927263 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-oauth-serving-cert\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927287 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlw62\" (UniqueName: \"kubernetes.io/projected/a88dbdcd-6064-4186-8edd-16341379ef97-kube-api-access-rlw62\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927309 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24de4abf-93cb-4fa1-8a90-2249d475ca57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927327 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-config\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927355 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lqvf\" (UniqueName: \"kubernetes.io/projected/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-kube-api-access-9lqvf\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8km\" (UniqueName: \"kubernetes.io/projected/8d937404-443a-4d0c-ab8c-4d61cebc4b18-kube-api-access-fw8km\") pod \"downloads-7954f5f757-hgksm\" (UID: \"8d937404-443a-4d0c-ab8c-4d61cebc4b18\") " pod="openshift-console/downloads-7954f5f757-hgksm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckgh9\" (UniqueName: 
\"kubernetes.io/projected/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-kube-api-access-ckgh9\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-etcd-client\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927478 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-image-import-ca\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927495 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-client\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927510 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927529 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-serving-cert\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927548 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927569 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kk7l\" (UniqueName: \"kubernetes.io/projected/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-kube-api-access-7kk7l\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927593 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: 
\"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927608 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927629 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-encryption-config\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927644 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-serving-cert\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927664 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-trusted-ca-bundle\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927680 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-trusted-ca\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927697 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef981f89-01c0-438a-a1b3-1f0e18d3496e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927717 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-serving-cert\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927738 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lblq8\" (UniqueName: \"kubernetes.io/projected/24de4abf-93cb-4fa1-8a90-2249d475ca57-kube-api-access-lblq8\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 
13:35:03.927759 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-client-ca\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927776 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6186c03c-22f6-4047-910f-6e2259a75960-auth-proxy-config\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927793 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcnnj\" (UniqueName: \"kubernetes.io/projected/b837e5a7-79f3-431e-ad7b-bd979aa81b41-kube-api-access-fcnnj\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6186c03c-22f6-4047-910f-6e2259a75960-config\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-dir\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927859 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927879 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wlc\" (UniqueName: \"kubernetes.io/projected/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-kube-api-access-d6wlc\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927895 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927918 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-serving-cert\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927934 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-config\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927951 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24de4abf-93cb-4fa1-8a90-2249d475ca57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b270566d-61fd-4698-bacd-22dd3f26ba3e-config\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.927985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928002 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928019 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928051 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-etcd-serving-ca\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928069 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928086 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928102 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dwm\" (UniqueName: \"kubernetes.io/projected/c9a559ae-d103-4979-bb70-6fb0a326f4b5-kube-api-access-r4dwm\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928121 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928137 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-service-ca-bundle\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928154 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-images\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928173 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b270566d-61fd-4698-bacd-22dd3f26ba3e-trusted-ca\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928189 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928203 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59wl\" (UniqueName: \"kubernetes.io/projected/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-kube-api-access-j59wl\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-serving-cert\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-audit\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928253 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928269 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-audit-dir\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928286 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928304 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-audit-dir\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928323 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c9a559ae-d103-4979-bb70-6fb0a326f4b5-serving-cert\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928339 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-audit-policies\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928494 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-node-pullsecrets\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.928545 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-config\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.943661 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-config\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.950841 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6186c03c-22f6-4047-910f-6e2259a75960-machine-approver-tls\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.950933 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.952593 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c9dr9"] Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.954281 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.957652 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-audit-dir\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.958709 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6186c03c-22f6-4047-910f-6e2259a75960-auth-proxy-config\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.960003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6186c03c-22f6-4047-910f-6e2259a75960-config\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.960010 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-audit\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.962860 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.992975 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.993834 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.994233 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.994369 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.996136 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-client-ca\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.996580 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-serving-cert\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.996675 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-etcd-serving-ca\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.998811 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.999377 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-image-import-ca\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:03 crc kubenswrapper[4771]: I0123 13:35:03.999488 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.001068 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-encryption-config\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.001534 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-serving-cert\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.005869 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-etcd-client\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.006075 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.007103 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.009942 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.010486 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.010552 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.011336 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.012152 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.012494 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.013397 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.014732 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.015864 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.017222 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wzpft"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.018147 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.018618 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.019269 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.019970 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.020448 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.021834 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.022369 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.023787 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.024749 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.024889 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.025480 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.025680 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.026433 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.027002 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-m58rh"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.027977 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.028553 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fbmxq"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029058 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029315 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kmwh\" (UniqueName: \"kubernetes.io/projected/b270566d-61fd-4698-bacd-22dd3f26ba3e-kube-api-access-9kmwh\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029344 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-config\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029377 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-config\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029399 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75tm8\" (UniqueName: \"kubernetes.io/projected/ef981f89-01c0-438a-a1b3-1f0e18d3496e-kube-api-access-75tm8\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029456 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029482 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029491 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-oauth-config\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029530 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7gnt\" (UniqueName: \"kubernetes.io/projected/6c1e299b-6a89-4d9c-87ff-e2937d66487d-kube-api-access-d7gnt\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029549 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b837e5a7-79f3-431e-ad7b-bd979aa81b41-serving-cert\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029570 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-service-ca\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029605 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-etcd-client\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029623 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-policies\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029639 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-service-ca\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029659 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029692 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029711 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spjwt\" (UniqueName: \"kubernetes.io/projected/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-kube-api-access-spjwt\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029732 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28667350-f72e-42d9-92d3-1e45074aa44c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029749 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-ca\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029772 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029829 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-encryption-config\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029852 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b270566d-61fd-4698-bacd-22dd3f26ba3e-serving-cert\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029877 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28667350-f72e-42d9-92d3-1e45074aa44c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: 
\"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029901 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xpr6\" (UniqueName: \"kubernetes.io/projected/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-kube-api-access-6xpr6\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029921 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029967 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-metrics-tls\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.029998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-oauth-serving-cert\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030025 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghdh7\" (UniqueName: \"kubernetes.io/projected/125049fc-2ad3-4834-929b-58894ab55ec7-kube-api-access-ghdh7\") pod \"cluster-samples-operator-665b6dd947-z6bkl\" (UID: \"125049fc-2ad3-4834-929b-58894ab55ec7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030073 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlw62\" (UniqueName: \"kubernetes.io/projected/a88dbdcd-6064-4186-8edd-16341379ef97-kube-api-access-rlw62\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030097 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24de4abf-93cb-4fa1-8a90-2249d475ca57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030131 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-config\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8km\" (UniqueName: \"kubernetes.io/projected/8d937404-443a-4d0c-ab8c-4d61cebc4b18-kube-api-access-fw8km\") pod \"downloads-7954f5f757-hgksm\" (UID: \"8d937404-443a-4d0c-ab8c-4d61cebc4b18\") " pod="openshift-console/downloads-7954f5f757-hgksm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030198 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030223 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-client\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-serving-cert\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030258 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030272 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030300 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kk7l\" (UniqueName: \"kubernetes.io/projected/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-kube-api-access-7kk7l\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030326 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030350 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-serving-cert\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-trusted-ca-bundle\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030511 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-trusted-ca\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030538 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef981f89-01c0-438a-a1b3-1f0e18d3496e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030662 4771 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lblq8\" (UniqueName: \"kubernetes.io/projected/24de4abf-93cb-4fa1-8a90-2249d475ca57-kube-api-access-lblq8\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030688 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcnnj\" (UniqueName: \"kubernetes.io/projected/b837e5a7-79f3-431e-ad7b-bd979aa81b41-kube-api-access-fcnnj\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030712 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-dir\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030741 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030765 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030790 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6wlc\" (UniqueName: \"kubernetes.io/projected/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-kube-api-access-d6wlc\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030815 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030851 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-config\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030874 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/24de4abf-93cb-4fa1-8a90-2249d475ca57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030899 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-serving-cert\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030921 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b270566d-61fd-4698-bacd-22dd3f26ba3e-config\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.030945 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031024 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031061 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031085 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031108 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4dwm\" (UniqueName: \"kubernetes.io/projected/c9a559ae-d103-4979-bb70-6fb0a326f4b5-kube-api-access-r4dwm\") pod 
\"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031146 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031196 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-service-ca-bundle\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031217 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-images\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031240 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031263 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59wl\" (UniqueName: \"kubernetes.io/projected/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-kube-api-access-j59wl\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031286 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b270566d-61fd-4698-bacd-22dd3f26ba3e-trusted-ca\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-audit-dir\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031368 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a559ae-d103-4979-bb70-6fb0a326f4b5-serving-cert\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031391 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-audit-policies\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031432 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/125049fc-2ad3-4834-929b-58894ab55ec7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z6bkl\" (UID: \"125049fc-2ad3-4834-929b-58894ab55ec7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031493 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxd4\" (UniqueName: \"kubernetes.io/projected/28667350-f72e-42d9-92d3-1e45074aa44c-kube-api-access-8fxd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031539 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031901 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-config\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.032137 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.032993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24de4abf-93cb-4fa1-8a90-2249d475ca57-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.033219 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-dir\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.033722 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.035473 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.035907 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-audit-dir\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.032705 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-config\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.036350 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-policies\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.036567 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.036973 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-audit-policies\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.036991 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-serving-cert\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.037091 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b837e5a7-79f3-431e-ad7b-bd979aa81b41-serving-cert\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.037370 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-service-ca-bundle\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.031169 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-service-ca\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.037427 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.038279 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-etcd-client\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.038287 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b270566d-61fd-4698-bacd-22dd3f26ba3e-trusted-ca\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.038287 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc 
kubenswrapper[4771]: I0123 13:35:04.038831 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.039733 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.039918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b837e5a7-79f3-431e-ad7b-bd979aa81b41-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.040327 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-oauth-serving-cert\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.040454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28667350-f72e-42d9-92d3-1e45074aa44c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.040563 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-trusted-ca-bundle\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.040682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-oauth-config\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.040966 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.041396 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b270566d-61fd-4698-bacd-22dd3f26ba3e-config\") pod 
\"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.042037 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28667350-f72e-42d9-92d3-1e45074aa44c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.042782 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.043221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ef981f89-01c0-438a-a1b3-1f0e18d3496e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.043899 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.045347 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.045883 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24de4abf-93cb-4fa1-8a90-2249d475ca57-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.046005 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.046327 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: 
\"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.046333 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.047227 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b270566d-61fd-4698-bacd-22dd3f26ba3e-serving-cert\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.047750 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.047942 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-encryption-config\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.048544 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.048742 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.048866 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-serving-cert\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.048910 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.049281 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xgb8j"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.049394 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.050308 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.050940 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.052727 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z5t5f"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.053113 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s9r77"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.053505 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-serving-cert\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.054037 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gftf6"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.055534 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c59wl"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.057314 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.061594 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-578qn"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.061717 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.067158 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.071812 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.072478 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.077818 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d67nm"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.079789 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.080986 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a559ae-d103-4979-bb70-6fb0a326f4b5-serving-cert\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.082474 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-prp7p"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.086468 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c9dr9"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.087561 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.089047 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.089276 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.090427 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-grzg6"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.091695 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2nhhg"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.093333 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.094459 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wzpft"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.095847 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.097100 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hgksm"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.097201 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-client\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.098537 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.099680 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.101005 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.102258 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-m58rh"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.103542 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-84f77"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.104870 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.106515 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.107555 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.108775 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lc7gv"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.109295 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.111020 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hh6f4"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.111154 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.112338 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.112496 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.113032 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.114290 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-config\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.114452 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2nhhg"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.115704 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.116955 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.118189 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.119836 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.121649 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fbmxq"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.122864 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lc7gv"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.125046 4771 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7qwqx"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.125636 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.126911 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7qwqx"] Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.129060 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.131451 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-ca\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.132194 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/125049fc-2ad3-4834-929b-58894ab55ec7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z6bkl\" (UID: \"125049fc-2ad3-4834-929b-58894ab55ec7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.132325 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghdh7\" (UniqueName: \"kubernetes.io/projected/125049fc-2ad3-4834-929b-58894ab55ec7-kube-api-access-ghdh7\") pod \"cluster-samples-operator-665b6dd947-z6bkl\" (UID: \"125049fc-2ad3-4834-929b-58894ab55ec7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.135257 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/125049fc-2ad3-4834-929b-58894ab55ec7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z6bkl\" (UID: \"125049fc-2ad3-4834-929b-58894ab55ec7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.149053 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.153060 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c9a559ae-d103-4979-bb70-6fb0a326f4b5-etcd-service-ca\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.168848 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.189980 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.210903 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.222883 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-metrics-tls\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.229217 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.257991 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.269555 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.269660 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-trusted-ca\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.310126 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.328847 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.349561 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.369673 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.389341 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.408597 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.429199 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.449965 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.470285 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.489508 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.510565 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.544122 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxxqh\" (UniqueName: 
\"kubernetes.io/projected/6186c03c-22f6-4047-910f-6e2259a75960-kube-api-access-pxxqh\") pod \"machine-approver-56656f9798-tbsms\" (UID: \"6186c03c-22f6-4047-910f-6e2259a75960\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.570620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lqvf\" (UniqueName: \"kubernetes.io/projected/b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9-kube-api-access-9lqvf\") pod \"apiserver-76f77b778f-gftf6\" (UID: \"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9\") " pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.589168 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.595714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckgh9\" (UniqueName: \"kubernetes.io/projected/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-kube-api-access-ckgh9\") pod \"controller-manager-879f6c89f-xgb8j\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.609052 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.628674 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.649392 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.690095 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.709537 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.730380 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.736988 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.748028 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.750233 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.770263 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.789590 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.798131 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.810279 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 13:35:04 crc kubenswrapper[4771]: W0123 13:35:04.854318 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6186c03c_22f6_4047_910f_6e2259a75960.slice/crio-8bdd5cf251155d8424c915331960dd3c75216837d1384ce7502da687e82f09ab WatchSource:0}: Error finding container 8bdd5cf251155d8424c915331960dd3c75216837d1384ce7502da687e82f09ab: Status 404 returned error can't find the container with id 8bdd5cf251155d8424c915331960dd3c75216837d1384ce7502da687e82f09ab Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.854331 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.854451 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.868394 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.891446 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.910364 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.929539 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.949283 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.969218 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 13:35:04 crc kubenswrapper[4771]: I0123 13:35:04.990361 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.000153 4771 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xgb8j"] Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.009875 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 13:35:05 crc kubenswrapper[4771]: W0123 13:35:05.015063 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9feed86a_3d92_4b4b_81aa_57ddf242e7ed.slice/crio-4e2eb5dc02cd700725b2a800f43e73c89cce2bf2de6a6e55a522bdd12fdfa8f8 WatchSource:0}: Error finding container 4e2eb5dc02cd700725b2a800f43e73c89cce2bf2de6a6e55a522bdd12fdfa8f8: Status 404 returned error can't find the container with id 4e2eb5dc02cd700725b2a800f43e73c89cce2bf2de6a6e55a522bdd12fdfa8f8 Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.018612 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" event={"ID":"6186c03c-22f6-4047-910f-6e2259a75960","Type":"ContainerStarted","Data":"8bdd5cf251155d8424c915331960dd3c75216837d1384ce7502da687e82f09ab"} Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.027748 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gftf6"] Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.027785 4771 request.go:700] Waited for 1.008250572s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.029939 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.033253 4771 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.033272 4771 secret.go:188] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.033319 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-config podName:ef981f89-01c0-438a-a1b3-1f0e18d3496e nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.53329995 +0000 UTC m=+146.555837575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-config") pod "machine-api-operator-5694c8668f-z5t5f" (UID: "ef981f89-01c0-438a-a1b3-1f0e18d3496e") : failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.033350 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert podName:a88dbdcd-6064-4186-8edd-16341379ef97 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.53332798 +0000 UTC m=+146.555865605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert") pod "route-controller-manager-6576b87f9c-skbsz" (UID: "a88dbdcd-6064-4186-8edd-16341379ef97") : failed to sync secret cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.033396 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.033445 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca podName:a88dbdcd-6064-4186-8edd-16341379ef97 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.533438564 +0000 UTC m=+146.555976189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca") pod "route-controller-manager-6576b87f9c-skbsz" (UID: "a88dbdcd-6064-4186-8edd-16341379ef97") : failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.038343 4771 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.038449 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-images podName:ef981f89-01c0-438a-a1b3-1f0e18d3496e nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.538432366 +0000 UTC m=+146.560969981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-images") pod "machine-api-operator-5694c8668f-z5t5f" (UID: "ef981f89-01c0-438a-a1b3-1f0e18d3496e") : failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.038474 4771 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.038517 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config podName:a88dbdcd-6064-4186-8edd-16341379ef97 nodeName:}" failed. No retries permitted until 2026-01-23 13:35:05.538507439 +0000 UTC m=+146.561045064 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config") pod "route-controller-manager-6576b87f9c-skbsz" (UID: "a88dbdcd-6064-4186-8edd-16341379ef97") : failed to sync configmap cache: timed out waiting for the condition Jan 23 13:35:05 crc kubenswrapper[4771]: W0123 13:35:05.039178 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6ac54a4_888f_4c81_b7eb_5b5ee0cce5b9.slice/crio-531f35629a4d6f9278f32994329918e7944367ffa5eae2ae9a673a9004c1ded8 WatchSource:0}: Error finding container 531f35629a4d6f9278f32994329918e7944367ffa5eae2ae9a673a9004c1ded8: Status 404 returned error can't find the container with id 531f35629a4d6f9278f32994329918e7944367ffa5eae2ae9a673a9004c1ded8 Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.048910 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.069870 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.091227 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.108947 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.129268 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.149541 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.170161 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.189064 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.209277 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.230036 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.251549 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.258112 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.258388 
4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.258619 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:35:05 crc kubenswrapper[4771]: E0123 13:35:05.258925 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:37:07.25890168 +0000 UTC m=+268.281439305 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.259655 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.269687 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.272095 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.290123 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.310151 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.329460 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.349493 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.361466 
4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.361934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.367131 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.368865 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.369702 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.388934 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.409323 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.429682 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.446152 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.454796 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.455051 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.470645 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.496113 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.509724 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.531002 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.547574 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.565002 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.565093 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.565170 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.565198 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-config\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.565251 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-images\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.575821 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kmwh\" (UniqueName: 
\"kubernetes.io/projected/b270566d-61fd-4698-bacd-22dd3f26ba3e-kube-api-access-9kmwh\") pod \"console-operator-58897d9998-578qn\" (UID: \"b270566d-61fd-4698-bacd-22dd3f26ba3e\") " pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.588312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fxd4\" (UniqueName: \"kubernetes.io/projected/28667350-f72e-42d9-92d3-1e45074aa44c-kube-api-access-8fxd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-2cntv\" (UID: \"28667350-f72e-42d9-92d3-1e45074aa44c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.606590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lblq8\" (UniqueName: \"kubernetes.io/projected/24de4abf-93cb-4fa1-8a90-2249d475ca57-kube-api-access-lblq8\") pod \"openshift-apiserver-operator-796bbdcf4f-zk9hn\" (UID: \"24de4abf-93cb-4fa1-8a90-2249d475ca57\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.647211 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjwt\" (UniqueName: \"kubernetes.io/projected/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-kube-api-access-spjwt\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.663235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7gnt\" (UniqueName: \"kubernetes.io/projected/6c1e299b-6a89-4d9c-87ff-e2937d66487d-kube-api-access-d7gnt\") pod \"console-f9d7485db-84f77\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:05 crc kubenswrapper[4771]: W0123 13:35:05.679346 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-d5292ff8fde6c57310900b168ae46bc203a0d571417de8aebf590ef270ca368b WatchSource:0}: Error finding container d5292ff8fde6c57310900b168ae46bc203a0d571417de8aebf590ef270ca368b: Status 404 returned error can't find the container with id d5292ff8fde6c57310900b168ae46bc203a0d571417de8aebf590ef270ca368b Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.685191 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75tm8\" (UniqueName: \"kubernetes.io/projected/ef981f89-01c0-438a-a1b3-1f0e18d3496e-kube-api-access-75tm8\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.717806 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8km\" (UniqueName: \"kubernetes.io/projected/8d937404-443a-4d0c-ab8c-4d61cebc4b18-kube-api-access-fw8km\") pod \"downloads-7954f5f757-hgksm\" (UID: \"8d937404-443a-4d0c-ab8c-4d61cebc4b18\") " pod="openshift-console/downloads-7954f5f757-hgksm" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.735965 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 13:35:05 crc 
kubenswrapper[4771]: I0123 13:35:05.741857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcnnj\" (UniqueName: \"kubernetes.io/projected/b837e5a7-79f3-431e-ad7b-bd979aa81b41-kube-api-access-fcnnj\") pod \"authentication-operator-69f744f599-s9r77\" (UID: \"b837e5a7-79f3-431e-ad7b-bd979aa81b41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.752287 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.794132 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea9c810d-0ac0-4528-98a7-a3b349a28a9e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-95nnz\" (UID: \"ea9c810d-0ac0-4528-98a7-a3b349a28a9e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.811243 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59wl\" (UniqueName: \"kubernetes.io/projected/7616383c-6e4c-4ae6-8fe8-f573ab4cdad9-kube-api-access-j59wl\") pod \"openshift-config-operator-7777fb866f-prp7p\" (UID: \"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.819721 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.831682 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.839043 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4dwm\" (UniqueName: \"kubernetes.io/projected/c9a559ae-d103-4979-bb70-6fb0a326f4b5-kube-api-access-r4dwm\") pod \"etcd-operator-b45778765-d67nm\" (UID: \"c9a559ae-d103-4979-bb70-6fb0a326f4b5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.839397 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.839532 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.876965 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.883138 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kk7l\" (UniqueName: \"kubernetes.io/projected/5e4fb568-7f2c-48cf-8cb8-7888b24016d1-kube-api-access-7kk7l\") pod \"apiserver-7bbb656c7d-sgjq5\" (UID: \"5e4fb568-7f2c-48cf-8cb8-7888b24016d1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.891660 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.902703 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.903651 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.920894 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.921329 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.924492 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hgksm" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.929750 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.933334 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.957656 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6wlc\" (UniqueName: \"kubernetes.io/projected/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-kube-api-access-d6wlc\") pod \"oauth-openshift-558db77b4-c59wl\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") " pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:05 crc kubenswrapper[4771]: I0123 13:35:05.969395 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:05.993634 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:05.995361 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xpr6\" (UniqueName: \"kubernetes.io/projected/b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a-kube-api-access-6xpr6\") pod \"cluster-image-registry-operator-dc59b4c8b-8d7hk\" (UID: \"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.016100 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.018092 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.032897 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.047748 4771 request.go:700] Waited for 1.953913499s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.052721 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.056922 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9e675dfff4450de52bed882e44056e6e535d2fd074188d986e27b65fe65aa202"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.069597 4771 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.076818 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"29ab89367891986b76f9603cd72287ba3dd68b3b7f031662613b798b514b0c76"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.076879 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d5292ff8fde6c57310900b168ae46bc203a0d571417de8aebf590ef270ca368b"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.082301 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" event={"ID":"9feed86a-3d92-4b4b-81aa-57ddf242e7ed","Type":"ContainerStarted","Data":"cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.082380 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" event={"ID":"9feed86a-3d92-4b4b-81aa-57ddf242e7ed","Type":"ContainerStarted","Data":"4e2eb5dc02cd700725b2a800f43e73c89cce2bf2de6a6e55a522bdd12fdfa8f8"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.082839 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.089044 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" event={"ID":"6186c03c-22f6-4047-910f-6e2259a75960","Type":"ContainerStarted","Data":"9f5a24be0bc7a9369246d96c18966899f8c3e9b84896d73ff0613df8bd160c8a"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.089104 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" event={"ID":"6186c03c-22f6-4047-910f-6e2259a75960","Type":"ContainerStarted","Data":"af77f72be0e659de204adb4f1404c529d9f388a5fd40c9f659e06d99ba4ab4a7"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.089872 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.090203 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.102792 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"adef0ae9e6256f3c40961956fd9425ac46b2b5353535c7903571d31c073c9c13"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.102851 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"54a3790f7ab68e621eecd79a86714f7245ffbded0fed2c8ecae4146bc3fb5934"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.115926 4771 generic.go:334] "Generic (PLEG): container finished" podID="b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9" containerID="9238366ec31fa42722f6a6c978c5e8ea196b83fb0ed3babd82a69939d83d98f7" exitCode=0 Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.115982 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" event={"ID":"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9","Type":"ContainerDied","Data":"9238366ec31fa42722f6a6c978c5e8ea196b83fb0ed3babd82a69939d83d98f7"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.116018 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" event={"ID":"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9","Type":"ContainerStarted","Data":"531f35629a4d6f9278f32994329918e7944367ffa5eae2ae9a673a9004c1ded8"} Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.120740 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.131829 4771 csr.go:261] certificate signing request csr-s66gs is approved, waiting to be issued Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.132146 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.140811 4771 csr.go:257] certificate signing request csr-s66gs is issued Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.152573 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.161634 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.170037 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.192455 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.208222 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.214106 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.229550 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.256383 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.270208 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.306891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghdh7\" (UniqueName: \"kubernetes.io/projected/125049fc-2ad3-4834-929b-58894ab55ec7-kube-api-access-ghdh7\") pod \"cluster-samples-operator-665b6dd947-z6bkl\" (UID: \"125049fc-2ad3-4834-929b-58894ab55ec7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322721 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44d2ff5d-162b-4773-ac29-54fa11375b9a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322775 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-tls\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-trusted-ca\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322866 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-bound-sa-token\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322898 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44d2ff5d-162b-4773-ac29-54fa11375b9a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322931 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-certificates\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322959 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.322991 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7gw8\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-kube-api-access-s7gw8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.324143 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-s9r77"] Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.324523 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:06.824505467 +0000 UTC m=+147.847043092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.339278 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.349077 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.363630 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.370350 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.397241 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.412032 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.412368 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.417338 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-config\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.424865 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425288 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-certificates\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425356 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d4bbd63-09df-418e-8ced-81942892cc71-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425375 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425396 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e00edbb-9068-41bf-b0af-d9a37af2880e-config\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425456 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e00edbb-9068-41bf-b0af-d9a37af2880e-serving-cert\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425476 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6a46658-8c32-45d8-bf00-1cf0ba747194-signing-cabundle\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425670 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7gs\" (UniqueName: \"kubernetes.io/projected/da1ca945-ccee-4468-8941-13ce9115dc6d-kube-api-access-lr7gs\") pod \"package-server-manager-789f6589d5-xcs2g\" (UID: \"da1ca945-ccee-4468-8941-13ce9115dc6d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425695 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.425760 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26f3377f-65d2-4673-bab4-ad00eb946a4d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:06 crc 
kubenswrapper[4771]: I0123 13:35:06.425914 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cc8f639-4d97-4b72-9453-c3d5ede2b322-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.426010 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d4bbd63-09df-418e-8ced-81942892cc71-proxy-tls\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.426033 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7afe73b6-e696-4a30-90b6-8ac66d83fe51-srv-cert\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.426057 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/60374146-a25c-42d9-82d8-dcad9368144c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g658k\" (UID: \"60374146-a25c-42d9-82d8-dcad9368144c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434367 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7gw8\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-kube-api-access-s7gw8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434508 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xnp9\" (UniqueName: \"kubernetes.io/projected/111f88b6-b7d7-4f59-9448-78697734f048-kube-api-access-9xnp9\") pod \"dns-operator-744455d44c-c9dr9\" (UID: \"111f88b6-b7d7-4f59-9448-78697734f048\") " pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434542 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk5dn\" (UniqueName: \"kubernetes.io/projected/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-kube-api-access-lk5dn\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434589 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-metrics-certs\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " 
pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434624 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb331896-5a8e-466c-a152-ef8b744289d2-cert\") pod \"ingress-canary-7qwqx\" (UID: \"bb331896-5a8e-466c-a152-ef8b744289d2\") " pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434690 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8wf\" (UniqueName: \"kubernetes.io/projected/2e00edbb-9068-41bf-b0af-d9a37af2880e-kube-api-access-jv8wf\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434719 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-config-volume\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.434768 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-tls\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.435124 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkgh6\" (UniqueName: \"kubernetes.io/projected/3cc8f639-4d97-4b72-9453-c3d5ede2b322-kube-api-access-zkgh6\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.435217 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/111f88b6-b7d7-4f59-9448-78697734f048-metrics-tls\") pod \"dns-operator-744455d44c-c9dr9\" (UID: \"111f88b6-b7d7-4f59-9448-78697734f048\") " pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.435250 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dh4\" (UniqueName: \"kubernetes.io/projected/1d4bbd63-09df-418e-8ced-81942892cc71-kube-api-access-b6dh4\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.435294 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-trusted-ca\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.435322 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-default-certificate\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.435346 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvcft\" (UniqueName: \"kubernetes.io/projected/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-kube-api-access-mvcft\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.435478 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:06.935446357 +0000 UTC m=+147.957984172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.438357 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.439925 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-certificates\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.440914 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251bb7d1-205b-4625-980e-3636bb66f8bc-config\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.440987 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-trusted-ca\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441053 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6a46658-8c32-45d8-bf00-1cf0ba747194-signing-key\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441095 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdzn\" (UniqueName: \"kubernetes.io/projected/60374146-a25c-42d9-82d8-dcad9368144c-kube-api-access-lrdzn\") pod \"control-plane-machine-set-operator-78cbb6b69f-g658k\" (UID: \"60374146-a25c-42d9-82d8-dcad9368144c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441165 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxxc\" (UniqueName: \"kubernetes.io/projected/f6a46658-8c32-45d8-bf00-1cf0ba747194-kube-api-access-5fxxc\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441194 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-stats-auth\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441238 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczdz\" (UniqueName: \"kubernetes.io/projected/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-kube-api-access-xczdz\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441326 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eaa22e2-97b3-4e61-835d-c9c293d1c515-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441687 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-registration-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441750 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkw52\" (UniqueName: \"kubernetes.io/projected/ff39a4f5-5820-481f-9209-08004e3e5280-kube-api-access-tkw52\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441779 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f3377f-65d2-4673-bab4-ad00eb946a4d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441812 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-metrics-tls\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441836 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxj49\" (UniqueName: \"kubernetes.io/projected/aad75c66-50a7-4716-8e83-1335b20d6d07-kube-api-access-vxj49\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.441950 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44d2ff5d-162b-4773-ac29-54fa11375b9a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442027 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aad75c66-50a7-4716-8e83-1335b20d6d07-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442085 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-certs\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442192 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3cc8f639-4d97-4b72-9453-c3d5ede2b322-proxy-tls\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442593 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3434d12e-d777-4664-a29a-1d2598306b09-secret-volume\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442618 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-mountpoint-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442690 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"kube-api-access-zpztq\" (UniqueName: \"kubernetes.io/projected/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-kube-api-access-zpztq\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442857 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.442991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443019 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67vk\" (UniqueName: \"kubernetes.io/projected/9466ce77-2c46-4094-8388-4c99168b2792-kube-api-access-t67vk\") pod \"multus-admission-controller-857f4d67dd-wzpft\" (UID: \"9466ce77-2c46-4094-8388-4c99168b2792\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443224 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-plugins-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443270 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-csi-data-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/251bb7d1-205b-4625-980e-3636bb66f8bc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443566 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44d2ff5d-162b-4773-ac29-54fa11375b9a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443603 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g9jx\" (UniqueName: \"kubernetes.io/projected/bb331896-5a8e-466c-a152-ef8b744289d2-kube-api-access-5g9jx\") pod \"ingress-canary-7qwqx\" (UID: \"bb331896-5a8e-466c-a152-ef8b744289d2\") " pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443648 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z594\" (UniqueName: \"kubernetes.io/projected/218b450b-13f3-42fe-b5e4-c51ca704b015-kube-api-access-5z594\") pod \"migrator-59844c95c7-8v9hg\" (UID: \"218b450b-13f3-42fe-b5e4-c51ca704b015\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443751 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmjw\" (UniqueName: \"kubernetes.io/projected/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-kube-api-access-htmjw\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443778 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-service-ca-bundle\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.443967 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44d2ff5d-162b-4773-ac29-54fa11375b9a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.444209 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466ce77-2c46-4094-8388-4c99168b2792-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wzpft\" (UID: \"9466ce77-2c46-4094-8388-4c99168b2792\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.444548 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:06.94452739 +0000 UTC m=+147.967065015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.445190 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9872w\" (UniqueName: \"kubernetes.io/projected/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-kube-api-access-9872w\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.446643 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d4bbd63-09df-418e-8ced-81942892cc71-images\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.446750 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/251bb7d1-205b-4625-980e-3636bb66f8bc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.447667 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3434d12e-d777-4664-a29a-1d2598306b09-config-volume\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.447740 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-node-bootstrap-token\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.448001 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5nxs\" (UniqueName: \"kubernetes.io/projected/3434d12e-d777-4664-a29a-1d2598306b09-kube-api-access-g5nxs\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.448046 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7afe73b6-e696-4a30-90b6-8ac66d83fe51-profile-collector-cert\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.448087 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f3377f-65d2-4673-bab4-ad00eb946a4d-config\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.448111 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-bound-sa-token\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.448130 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da1ca945-ccee-4468-8941-13ce9115dc6d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xcs2g\" (UID: \"da1ca945-ccee-4468-8941-13ce9115dc6d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.448200 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eaa22e2-97b3-4e61-835d-c9c293d1c515-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.448249 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eaa22e2-97b3-4e61-835d-c9c293d1c515-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.449479 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-tls\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.449667 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shqz\" (UniqueName: \"kubernetes.io/projected/7afe73b6-e696-4a30-90b6-8ac66d83fe51-kube-api-access-6shqz\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.449720 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.451039 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aad75c66-50a7-4716-8e83-1335b20d6d07-srv-cert\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.451089 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-socket-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.451110 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-tmpfs\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.451137 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-webhook-cert\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.455877 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.456117 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44d2ff5d-162b-4773-ac29-54fa11375b9a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.466878 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.481995 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.482330 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.492647 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.496282 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ef981f89-01c0-438a-a1b3-1f0e18d3496e-images\") pod \"machine-api-operator-5694c8668f-z5t5f\" (UID: \"ef981f89-01c0-438a-a1b3-1f0e18d3496e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.498121 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlw62\" (UniqueName: \"kubernetes.io/projected/a88dbdcd-6064-4186-8edd-16341379ef97-kube-api-access-rlw62\") pod \"route-controller-manager-6576b87f9c-skbsz\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.508446 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-578qn"] Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.564988 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.064955736 +0000 UTC m=+148.087493361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565016 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565217 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-node-bootstrap-token\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f3377f-65d2-4673-bab4-ad00eb946a4d-config\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565266 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5nxs\" (UniqueName: \"kubernetes.io/projected/3434d12e-d777-4664-a29a-1d2598306b09-kube-api-access-g5nxs\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565284 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7afe73b6-e696-4a30-90b6-8ac66d83fe51-profile-collector-cert\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565307 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/da1ca945-ccee-4468-8941-13ce9115dc6d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xcs2g\" (UID: \"da1ca945-ccee-4468-8941-13ce9115dc6d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565326 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eaa22e2-97b3-4e61-835d-c9c293d1c515-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: 
I0123 13:35:06.565342 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eaa22e2-97b3-4e61-835d-c9c293d1c515-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565357 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565372 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6shqz\" (UniqueName: \"kubernetes.io/projected/7afe73b6-e696-4a30-90b6-8ac66d83fe51-kube-api-access-6shqz\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565390 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aad75c66-50a7-4716-8e83-1335b20d6d07-srv-cert\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565422 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-socket-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565438 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-tmpfs\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565452 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-webhook-cert\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565471 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d4bbd63-09df-418e-8ced-81942892cc71-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565504 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e00edbb-9068-41bf-b0af-d9a37af2880e-config\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565519 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e00edbb-9068-41bf-b0af-d9a37af2880e-serving-cert\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565535 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6a46658-8c32-45d8-bf00-1cf0ba747194-signing-cabundle\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565555 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7gs\" (UniqueName: \"kubernetes.io/projected/da1ca945-ccee-4468-8941-13ce9115dc6d-kube-api-access-lr7gs\") pod \"package-server-manager-789f6589d5-xcs2g\" (UID: \"da1ca945-ccee-4468-8941-13ce9115dc6d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565578 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565596 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26f3377f-65d2-4673-bab4-ad00eb946a4d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565612 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cc8f639-4d97-4b72-9453-c3d5ede2b322-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92"
Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565627 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d4bbd63-09df-418e-8ced-81942892cc71-proxy-tls\") pod
\"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565643 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7afe73b6-e696-4a30-90b6-8ac66d83fe51-srv-cert\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565666 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/60374146-a25c-42d9-82d8-dcad9368144c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g658k\" (UID: \"60374146-a25c-42d9-82d8-dcad9368144c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565688 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xnp9\" (UniqueName: \"kubernetes.io/projected/111f88b6-b7d7-4f59-9448-78697734f048-kube-api-access-9xnp9\") pod \"dns-operator-744455d44c-c9dr9\" (UID: \"111f88b6-b7d7-4f59-9448-78697734f048\") " pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565704 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk5dn\" (UniqueName: \"kubernetes.io/projected/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-kube-api-access-lk5dn\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565724 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-metrics-certs\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565738 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb331896-5a8e-466c-a152-ef8b744289d2-cert\") pod \"ingress-canary-7qwqx\" (UID: \"bb331896-5a8e-466c-a152-ef8b744289d2\") " pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565754 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-config-volume\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv8wf\" (UniqueName: \"kubernetes.io/projected/2e00edbb-9068-41bf-b0af-d9a37af2880e-kube-api-access-jv8wf\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 
13:35:06.565789 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkgh6\" (UniqueName: \"kubernetes.io/projected/3cc8f639-4d97-4b72-9453-c3d5ede2b322-kube-api-access-zkgh6\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565813 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/111f88b6-b7d7-4f59-9448-78697734f048-metrics-tls\") pod \"dns-operator-744455d44c-c9dr9\" (UID: \"111f88b6-b7d7-4f59-9448-78697734f048\") " pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565828 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6dh4\" (UniqueName: \"kubernetes.io/projected/1d4bbd63-09df-418e-8ced-81942892cc71-kube-api-access-b6dh4\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565849 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-default-certificate\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565866 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvcft\" (UniqueName: \"kubernetes.io/projected/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-kube-api-access-mvcft\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565885 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251bb7d1-205b-4625-980e-3636bb66f8bc-config\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565900 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6a46658-8c32-45d8-bf00-1cf0ba747194-signing-key\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565917 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-stats-auth\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565932 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdzn\" (UniqueName: \"kubernetes.io/projected/60374146-a25c-42d9-82d8-dcad9368144c-kube-api-access-lrdzn\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-g658k\" (UID: \"60374146-a25c-42d9-82d8-dcad9368144c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565948 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fxxc\" (UniqueName: \"kubernetes.io/projected/f6a46658-8c32-45d8-bf00-1cf0ba747194-kube-api-access-5fxxc\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565965 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xczdz\" (UniqueName: \"kubernetes.io/projected/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-kube-api-access-xczdz\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565981 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eaa22e2-97b3-4e61-835d-c9c293d1c515-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.565996 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-registration-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566013 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkw52\" (UniqueName: \"kubernetes.io/projected/ff39a4f5-5820-481f-9209-08004e3e5280-kube-api-access-tkw52\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f3377f-65d2-4673-bab4-ad00eb946a4d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566043 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-metrics-tls\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxj49\" (UniqueName: \"kubernetes.io/projected/aad75c66-50a7-4716-8e83-1335b20d6d07-kube-api-access-vxj49\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566071 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aad75c66-50a7-4716-8e83-1335b20d6d07-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566087 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-certs\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3cc8f639-4d97-4b72-9453-c3d5ede2b322-proxy-tls\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566119 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3434d12e-d777-4664-a29a-1d2598306b09-secret-volume\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566134 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-mountpoint-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566152 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpztq\" (UniqueName: \"kubernetes.io/projected/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-kube-api-access-zpztq\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566169 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566185 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 
13:35:06.566207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67vk\" (UniqueName: \"kubernetes.io/projected/9466ce77-2c46-4094-8388-4c99168b2792-kube-api-access-t67vk\") pod \"multus-admission-controller-857f4d67dd-wzpft\" (UID: \"9466ce77-2c46-4094-8388-4c99168b2792\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566241 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-plugins-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566256 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-csi-data-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566271 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/251bb7d1-205b-4625-980e-3636bb66f8bc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566288 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g9jx\" (UniqueName: \"kubernetes.io/projected/bb331896-5a8e-466c-a152-ef8b744289d2-kube-api-access-5g9jx\") pod \"ingress-canary-7qwqx\" (UID: \"bb331896-5a8e-466c-a152-ef8b744289d2\") " pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566306 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z594\" (UniqueName: \"kubernetes.io/projected/218b450b-13f3-42fe-b5e4-c51ca704b015-kube-api-access-5z594\") pod \"migrator-59844c95c7-8v9hg\" (UID: \"218b450b-13f3-42fe-b5e4-c51ca704b015\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htmjw\" (UniqueName: \"kubernetes.io/projected/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-kube-api-access-htmjw\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566336 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-service-ca-bundle\") pod 
\"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566354 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466ce77-2c46-4094-8388-4c99168b2792-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wzpft\" (UID: \"9466ce77-2c46-4094-8388-4c99168b2792\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566373 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9872w\" (UniqueName: \"kubernetes.io/projected/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-kube-api-access-9872w\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566390 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/251bb7d1-205b-4625-980e-3636bb66f8bc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566493 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d4bbd63-09df-418e-8ced-81942892cc71-images\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.566534 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3434d12e-d777-4664-a29a-1d2598306b09-config-volume\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.567287 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3434d12e-d777-4664-a29a-1d2598306b09-config-volume\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.568222 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-mountpoint-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.569606 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-registration-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.571361 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-plugins-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.571732 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.071714606 +0000 UTC m=+148.094252411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.571896 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-csi-data-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.577491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251bb7d1-205b-4625-980e-3636bb66f8bc-config\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.580267 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-tmpfs\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.581314 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d4bbd63-09df-418e-8ced-81942892cc71-images\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.581519 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e00edbb-9068-41bf-b0af-d9a37af2880e-config\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.581689 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-default-certificate\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 
13:35:06.581782 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-socket-dir\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.582145 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.582268 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d4bbd63-09df-418e-8ced-81942892cc71-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.583552 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3434d12e-d777-4664-a29a-1d2598306b09-secret-volume\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.585423 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-config-volume\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.592648 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6a46658-8c32-45d8-bf00-1cf0ba747194-signing-cabundle\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.593693 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/251bb7d1-205b-4625-980e-3636bb66f8bc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.593983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3cc8f639-4d97-4b72-9453-c3d5ede2b322-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.596602 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aad75c66-50a7-4716-8e83-1335b20d6d07-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rkjjk\" 
(UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.598108 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3cc8f639-4d97-4b72-9453-c3d5ede2b322-proxy-tls\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.600718 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26f3377f-65d2-4673-bab4-ad00eb946a4d-config\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.602281 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb331896-5a8e-466c-a152-ef8b744289d2-cert\") pod \"ingress-canary-7qwqx\" (UID: \"bb331896-5a8e-466c-a152-ef8b744289d2\") " pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.611642 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aad75c66-50a7-4716-8e83-1335b20d6d07-srv-cert\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.618979 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.621258 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-metrics-certs\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.622532 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-metrics-tls\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.622773 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6a46658-8c32-45d8-bf00-1cf0ba747194-signing-key\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.622939 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.623097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-apiservice-cert\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.623202 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-stats-auth\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.623397 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-node-bootstrap-token\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.623582 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-webhook-cert\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.623658 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d4bbd63-09df-418e-8ced-81942892cc71-proxy-tls\") pod 
\"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.623928 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e00edbb-9068-41bf-b0af-d9a37af2880e-serving-cert\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.624046 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/60374146-a25c-42d9-82d8-dcad9368144c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-g658k\" (UID: \"60374146-a25c-42d9-82d8-dcad9368144c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.624754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/111f88b6-b7d7-4f59-9448-78697734f048-metrics-tls\") pod \"dns-operator-744455d44c-c9dr9\" (UID: \"111f88b6-b7d7-4f59-9448-78697734f048\") " pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.626559 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-certs\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.627315 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.627825 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-service-ca-bundle\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.634614 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvcft\" (UniqueName: \"kubernetes.io/projected/cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019-kube-api-access-mvcft\") pod \"csi-hostpathplugin-lc7gv\" (UID: \"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019\") " pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.634660 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7gw8\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-kube-api-access-s7gw8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.635240 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eaa22e2-97b3-4e61-835d-c9c293d1c515-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.641649 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.643962 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eaa22e2-97b3-4e61-835d-c9c293d1c515-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.645014 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7afe73b6-e696-4a30-90b6-8ac66d83fe51-srv-cert\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.645974 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9466ce77-2c46-4094-8388-4c99168b2792-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wzpft\" (UID: \"9466ce77-2c46-4094-8388-4c99168b2792\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.649062 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eaa22e2-97b3-4e61-835d-c9c293d1c515-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sj8wv\" (UID: \"7eaa22e2-97b3-4e61-835d-c9c293d1c515\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.649971 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26f3377f-65d2-4673-bab4-ad00eb946a4d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.665957 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-bound-sa-token\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.667341 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/da1ca945-ccee-4468-8941-13ce9115dc6d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xcs2g\" (UID: \"da1ca945-ccee-4468-8941-13ce9115dc6d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.667770 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.668211 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.168173916 +0000 UTC m=+148.190711541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.668546 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.669065 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.169055135 +0000 UTC m=+148.191592760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.681207 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7afe73b6-e696-4a30-90b6-8ac66d83fe51-profile-collector-cert\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.687857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxj49\" (UniqueName: \"kubernetes.io/projected/aad75c66-50a7-4716-8e83-1335b20d6d07-kube-api-access-vxj49\") pod \"olm-operator-6b444d44fb-rkjjk\" (UID: \"aad75c66-50a7-4716-8e83-1335b20d6d07\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.700131 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.703265 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fxxc\" (UniqueName: \"kubernetes.io/projected/f6a46658-8c32-45d8-bf00-1cf0ba747194-kube-api-access-5fxxc\") pod \"service-ca-9c57cc56f-m58rh\" (UID: \"f6a46658-8c32-45d8-bf00-1cf0ba747194\") " pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.710953 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpztq\" (UniqueName: \"kubernetes.io/projected/bedfc0c0-0c8c-4c3f-8561-e5c7f969f578-kube-api-access-zpztq\") pod \"kube-storage-version-migrator-operator-b67b599dd-8xvh2\" (UID: \"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.729322 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdzn\" (UniqueName: \"kubernetes.io/projected/60374146-a25c-42d9-82d8-dcad9368144c-kube-api-access-lrdzn\") pod \"control-plane-machine-set-operator-78cbb6b69f-g658k\" (UID: \"60374146-a25c-42d9-82d8-dcad9368144c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.734052 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c59wl"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.743141 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.745450 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkw52\" (UniqueName: \"kubernetes.io/projected/ff39a4f5-5820-481f-9209-08004e3e5280-kube-api-access-tkw52\") pod \"marketplace-operator-79b997595-fbmxq\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") " pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.771935 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.772475 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.27245556 +0000 UTC m=+148.294993185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.775737 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.789276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczdz\" (UniqueName: \"kubernetes.io/projected/800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c-kube-api-access-xczdz\") pod \"machine-config-server-hh6f4\" (UID: \"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c\") " pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:06 crc kubenswrapper[4771]: W0123 13:35:06.789448 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48f25d01_9b0c_4851_9f6b_4a49fc631e4c.slice/crio-49e4c4ae35c01eab6a955214305d6d841af2b0815930ffb784b96a4e8d12be67 WatchSource:0}: Error finding container 49e4c4ae35c01eab6a955214305d6d841af2b0815930ffb784b96a4e8d12be67: Status 404 returned error can't find the container with id 49e4c4ae35c01eab6a955214305d6d841af2b0815930ffb784b96a4e8d12be67 Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.793211 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g9jx\" (UniqueName: \"kubernetes.io/projected/bb331896-5a8e-466c-a152-ef8b744289d2-kube-api-access-5g9jx\") pod \"ingress-canary-7qwqx\" (UID: \"bb331896-5a8e-466c-a152-ef8b744289d2\") " pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.812222 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67vk\" (UniqueName: \"kubernetes.io/projected/9466ce77-2c46-4094-8388-4c99168b2792-kube-api-access-t67vk\") pod 
\"multus-admission-controller-857f4d67dd-wzpft\" (UID: \"9466ce77-2c46-4094-8388-4c99168b2792\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.824980 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.826843 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/251bb7d1-205b-4625-980e-3636bb66f8bc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8r5lw\" (UID: \"251bb7d1-205b-4625-980e-3636bb66f8bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.842111 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-84f77"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.846524 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.860851 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.866869 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z594\" (UniqueName: \"kubernetes.io/projected/218b450b-13f3-42fe-b5e4-c51ca704b015-kube-api-access-5z594\") pod \"migrator-59844c95c7-8v9hg\" (UID: \"218b450b-13f3-42fe-b5e4-c51ca704b015\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.867644 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.876653 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.877013 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.376999333 +0000 UTC m=+148.399536958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.882897 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.883850 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.886530 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.889571 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9872w\" (UniqueName: \"kubernetes.io/projected/67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e-kube-api-access-9872w\") pod \"router-default-5444994796-5k7z5\" (UID: \"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e\") " pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.891156 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htmjw\" (UniqueName: \"kubernetes.io/projected/8e7e312a-b17a-497c-b14a-11bdecbe5d0c-kube-api-access-htmjw\") pod \"dns-default-2nhhg\" (UID: \"8e7e312a-b17a-497c-b14a-11bdecbe5d0c\") " pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.892056 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5nxs\" (UniqueName: \"kubernetes.io/projected/3434d12e-d777-4664-a29a-1d2598306b09-kube-api-access-g5nxs\") pod \"collect-profiles-29486250-89rvd\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.895765 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-prp7p"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.898172 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.922000 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hgksm"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.935019 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.950770 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d67nm"] Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.957580 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.961036 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.961552 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" Jan 23 13:35:06 crc kubenswrapper[4771]: I0123 13:35:06.977391 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:06 crc kubenswrapper[4771]: E0123 13:35:06.978277 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.478257079 +0000 UTC m=+148.500794704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:06.993208 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:06.997558 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6dh4\" (UniqueName: \"kubernetes.io/projected/1d4bbd63-09df-418e-8ced-81942892cc71-kube-api-access-b6dh4\") pod \"machine-config-operator-74547568cd-rlk5w\" (UID: \"1d4bbd63-09df-418e-8ced-81942892cc71\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:06.998649 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.002196 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr7gs\" (UniqueName: \"kubernetes.io/projected/da1ca945-ccee-4468-8941-13ce9115dc6d-kube-api-access-lr7gs\") pod \"package-server-manager-789f6589d5-xcs2g\" (UID: \"da1ca945-ccee-4468-8941-13ce9115dc6d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.007461 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl"] Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.008069 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkgh6\" (UniqueName: \"kubernetes.io/projected/3cc8f639-4d97-4b72-9453-c3d5ede2b322-kube-api-access-zkgh6\") pod \"machine-config-controller-84d6567774-hrl92\" (UID: \"3cc8f639-4d97-4b72-9453-c3d5ede2b322\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.012460 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk5dn\" (UniqueName: \"kubernetes.io/projected/d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6-kube-api-access-lk5dn\") pod \"packageserver-d55dfcdfc-sg5d9\" (UID: \"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.020555 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv8wf\" (UniqueName: \"kubernetes.io/projected/2e00edbb-9068-41bf-b0af-d9a37af2880e-kube-api-access-jv8wf\") pod \"service-ca-operator-777779d784-8tfhc\" (UID: \"2e00edbb-9068-41bf-b0af-d9a37af2880e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.047556 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hh6f4" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.059640 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7qwqx" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.065756 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26f3377f-65d2-4673-bab4-ad00eb946a4d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6l4kn\" (UID: \"26f3377f-65d2-4673-bab4-ad00eb946a4d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.082735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.082928 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xnp9\" (UniqueName: \"kubernetes.io/projected/111f88b6-b7d7-4f59-9448-78697734f048-kube-api-access-9xnp9\") pod \"dns-operator-744455d44c-c9dr9\" (UID: \"111f88b6-b7d7-4f59-9448-78697734f048\") " pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.083136 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.583120642 +0000 UTC m=+148.605658267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.097487 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6shqz\" (UniqueName: \"kubernetes.io/projected/7afe73b6-e696-4a30-90b6-8ac66d83fe51-kube-api-access-6shqz\") pod \"catalog-operator-68c6474976-fjhrj\" (UID: \"7afe73b6-e696-4a30-90b6-8ac66d83fe51\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.139692 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.144749 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 13:30:06 +0000 UTC, rotation deadline is 2026-10-13 04:28:18.042954708 +0000 UTC Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.144832 4771 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6302h53m10.89812642s for next certificate rotation Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.156747 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" event={"ID":"5e4fb568-7f2c-48cf-8cb8-7888b24016d1","Type":"ContainerStarted","Data":"06e793089b02bb5ff3b84f2bacf25527d37194f619a0cef0da3401153ebe854e"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.161780 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.173479 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f7ba59c9839fd8ee98b8156a0dcef91ac847a89c2832d94a4fde794c721a1d4c"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.174774 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.181599 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.182071 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z5t5f"] Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.183561 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.184843 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.684827553 +0000 UTC m=+148.707365178 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.188905 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-578qn" event={"ID":"b270566d-61fd-4698-bacd-22dd3f26ba3e","Type":"ContainerStarted","Data":"547e95601463d2050bd5f4f928c7b62984254757af9619aa68688e2301f363d9"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.188959 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-578qn" event={"ID":"b270566d-61fd-4698-bacd-22dd3f26ba3e","Type":"ContainerStarted","Data":"e8cea56ed9a4780f1a83c973d9252fd038c4aec019a109b7f61c8ef13d2365af"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.191009 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.195653 4771 patch_prober.go:28] interesting pod/console-operator-58897d9998-578qn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.195722 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-578qn" podUID="b270566d-61fd-4698-bacd-22dd3f26ba3e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.203127 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.207496 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" event={"ID":"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9","Type":"ContainerStarted","Data":"a0f493656631ee4e1f454e705503ff75678a3fb14f8e540d6d5ac2b93a567542"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.211450 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.222535 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" event={"ID":"b837e5a7-79f3-431e-ad7b-bd979aa81b41","Type":"ContainerStarted","Data":"79159a7eb6f5d04e9a17d008d85b1eeaf549094ea95dfb9d3bc929b15b33e46d"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.222586 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" event={"ID":"b837e5a7-79f3-431e-ad7b-bd979aa81b41","Type":"ContainerStarted","Data":"5aa4a61935805c5d80ba3858892348e3dd9b172297bb567148ecec17836ccae7"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.225031 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.240515 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.273949 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.279708 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.286716 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.287072 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.78705621 +0000 UTC m=+148.809593835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.287914 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" event={"ID":"ea9c810d-0ac0-4528-98a7-a3b349a28a9e","Type":"ContainerStarted","Data":"cfa316c511ea1f8741cfa96bdd7bfdb971bd5fe7e83258b25ab956bb2be2408a"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.287957 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz"] Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.288946 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" event={"ID":"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9","Type":"ContainerStarted","Data":"6926afb073b573576a5b4e9b31b8df574f291f463b8308d82ee18f209c133fa3"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.291936 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" event={"ID":"48f25d01-9b0c-4851-9f6b-4a49fc631e4c","Type":"ContainerStarted","Data":"49e4c4ae35c01eab6a955214305d6d841af2b0815930ffb784b96a4e8d12be67"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.296985 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hgksm" event={"ID":"8d937404-443a-4d0c-ab8c-4d61cebc4b18","Type":"ContainerStarted","Data":"087537cada0fb4eb56bb9fa460b29114d6b3578fbf28f4504b2103f5e3dee885"} Jan 23 13:35:07 crc kubenswrapper[4771]: W0123 13:35:07.298538 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef981f89_01c0_438a_a1b3_1f0e18d3496e.slice/crio-19634f03c84c8216d7ccce444d27e7f367d3e00f15f2ef1cb70e349f86d1e480 WatchSource:0}: Error finding container 19634f03c84c8216d7ccce444d27e7f367d3e00f15f2ef1cb70e349f86d1e480: Status 404 returned error can't find the container with id 19634f03c84c8216d7ccce444d27e7f367d3e00f15f2ef1cb70e349f86d1e480 Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.299439 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-84f77" event={"ID":"6c1e299b-6a89-4d9c-87ff-e2937d66487d","Type":"ContainerStarted","Data":"aeca68a8ccb77467525b55fc39d2a4667bbc683e12b0c1ee0b3629d88e52323f"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.308510 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" event={"ID":"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a","Type":"ContainerStarted","Data":"db4ced2fd547cd926f5fbd843feff26c3303509c5d87f7ae378bfedeec54e752"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.346829 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-lc7gv"] Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.374601 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" event={"ID":"24de4abf-93cb-4fa1-8a90-2249d475ca57","Type":"ContainerStarted","Data":"80a1ed42364df2783ce10883aaef18513e6e40b575422cc19d38ef4e0ce6f816"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.374672 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" event={"ID":"24de4abf-93cb-4fa1-8a90-2249d475ca57","Type":"ContainerStarted","Data":"bbe5420bf18d65dc3da411f8e918a0278d8376d534ac87c288d29e256be7945d"} Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.407972 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.409156 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:07.9091312 +0000 UTC m=+148.931668825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.496538 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2"] Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.509240 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.509660 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.009644362 +0000 UTC m=+149.032181987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.610730 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.611163 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.111146547 +0000 UTC m=+149.133684172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.719960 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.720331 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.220317279 +0000 UTC m=+149.242854904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.743973 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" podStartSLOduration=131.743944643 podStartE2EDuration="2m11.743944643s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:07.741905957 +0000 UTC m=+148.764443582" watchObservedRunningTime="2026-01-23 13:35:07.743944643 +0000 UTC m=+148.766482268" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.812441 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv"] Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.820748 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.821065 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.321047658 +0000 UTC m=+149.343585283 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.829862 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wzpft"] Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.876556 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk9hn" podStartSLOduration=131.876538673 podStartE2EDuration="2m11.876538673s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:07.839716272 +0000 UTC m=+148.862253917" watchObservedRunningTime="2026-01-23 13:35:07.876538673 +0000 UTC m=+148.899076298" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.877618 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-578qn" podStartSLOduration=131.877611607 podStartE2EDuration="2m11.877611607s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:07.875364525 +0000 UTC m=+148.897902150" watchObservedRunningTime="2026-01-23 13:35:07.877611607 +0000 UTC m=+148.900149232" Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.925580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:07 crc kubenswrapper[4771]: E0123 13:35:07.926279 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.426258712 +0000 UTC m=+149.448796527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:07 crc kubenswrapper[4771]: I0123 13:35:07.951620 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw"] Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:07.994872 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg"] Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.027971 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.028253 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.528237942 +0000 UTC m=+149.550775567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.129195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.129571 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.629554599 +0000 UTC m=+149.652092224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.219910 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k"] Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.230015 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.230596 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.730579858 +0000 UTC m=+149.753117493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: W0123 13:35:08.297773 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9466ce77_2c46_4094_8388_4c99168b2792.slice/crio-5a31d189085930198e91ba067eebb3bc4ed8f7fc17515ac2165ec49c63ab4d48 WatchSource:0}: Error finding container 5a31d189085930198e91ba067eebb3bc4ed8f7fc17515ac2165ec49c63ab4d48: Status 404 returned error can't find the container with id 5a31d189085930198e91ba067eebb3bc4ed8f7fc17515ac2165ec49c63ab4d48 Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.315603 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj"] Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.332956 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.333266 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.83325469 +0000 UTC m=+149.855792315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.398146 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" event={"ID":"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019","Type":"ContainerStarted","Data":"f1e9442be549f0acb1713aaf7a03ded6ab9c4269994e405a1a00a574bc6f3983"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.402519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" event={"ID":"ef981f89-01c0-438a-a1b3-1f0e18d3496e","Type":"ContainerStarted","Data":"19634f03c84c8216d7ccce444d27e7f367d3e00f15f2ef1cb70e349f86d1e480"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.404835 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-84f77" event={"ID":"6c1e299b-6a89-4d9c-87ff-e2937d66487d","Type":"ContainerStarted","Data":"4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.406031 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" event={"ID":"c9a559ae-d103-4979-bb70-6fb0a326f4b5","Type":"ContainerStarted","Data":"0f61cd9bd0f43ee7ee99589168108c5cd1a7f1c7ef4739d890d31e19f9f9640c"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.413139 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5k7z5" event={"ID":"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e","Type":"ContainerStarted","Data":"9cf3a42affb57fbc15cf32d86e9cd6a14699d6b85a28fe06d0f94619a995aabf"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.438970 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.439445 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:08.939373574 +0000 UTC m=+149.961911199 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.448206 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" event={"ID":"28667350-f72e-42d9-92d3-1e45074aa44c","Type":"ContainerStarted","Data":"297d9d09857be84d7432383922a59db0bd98daa20ccc0f4c057540ebc4c08346"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.503977 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" event={"ID":"9466ce77-2c46-4094-8388-4c99168b2792","Type":"ContainerStarted","Data":"5a31d189085930198e91ba067eebb3bc4ed8f7fc17515ac2165ec49c63ab4d48"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.525218 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" event={"ID":"251bb7d1-205b-4625-980e-3636bb66f8bc","Type":"ContainerStarted","Data":"8c18739a134a422f1106be2eae65e8f2cf2273ee2d49d585ce7c0a0bcd94eb8b"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.541646 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tbsms" podStartSLOduration=132.541620321 podStartE2EDuration="2m12.541620321s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:08.510846126 +0000 UTC m=+149.533383751" watchObservedRunningTime="2026-01-23 13:35:08.541620321 +0000 UTC m=+149.564157946" Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.549667 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.570008 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.069984969 +0000 UTC m=+150.092522594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.570440 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g"] Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.570476 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" event={"ID":"a88dbdcd-6064-4186-8edd-16341379ef97","Type":"ContainerStarted","Data":"8bcf63040344cd807116ec88aa4199ebc847ad0efbe657f3978699a2d69d6c4a"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.610269 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" event={"ID":"7eaa22e2-97b3-4e61-835d-c9c293d1c515","Type":"ContainerStarted","Data":"15faf615135a313c832fd4e7b3113d187463cb9f1f359f9f9d876864e7a453d1"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.626353 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" event={"ID":"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578","Type":"ContainerStarted","Data":"24c2f86895052bd11be3ccc9e6f2689ac2800e4d1cd1e4183b36ca3ba85123d9"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.627822 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" event={"ID":"125049fc-2ad3-4834-929b-58894ab55ec7","Type":"ContainerStarted","Data":"c0fe3b3c89de2a07f189b3bf524ca979864a26838825ddea5853b396c55d5b30"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.628843 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" event={"ID":"48f25d01-9b0c-4851-9f6b-4a49fc631e4c","Type":"ContainerStarted","Data":"abc9dd69238f5cc36c402cf0edee1e97067cb75f989d23fb723a0c2cccd20198"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.629336 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.640090 4771 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c59wl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.640157 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" podUID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.650175 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" 
event={"ID":"b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9","Type":"ContainerStarted","Data":"c0fcc598aa05444b97351826ed1063255816938ba5612f235e4e88c75f2de682"} Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.663048 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-578qn" Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.669082 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-s9r77" podStartSLOduration=132.669061445 podStartE2EDuration="2m12.669061445s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:08.622106785 +0000 UTC m=+149.644644420" watchObservedRunningTime="2026-01-23 13:35:08.669061445 +0000 UTC m=+149.691599070" Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.671951 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.672279 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.172261079 +0000 UTC m=+150.194798704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.773219 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.778198 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.278174526 +0000 UTC m=+150.300712341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.783075 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92"] Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.874231 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.874582 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.374563844 +0000 UTC m=+150.397101469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: I0123 13:35:08.978086 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:08 crc kubenswrapper[4771]: E0123 13:35:08.978932 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.47891615 +0000 UTC m=+150.501453775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:08 crc kubenswrapper[4771]: W0123 13:35:08.993510 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cc8f639_4d97_4b72_9453_c3d5ede2b322.slice/crio-c13211968a970a8ad14b7f957b2c43054cf17ca5286e16c3b35f02a1eaad2e52 WatchSource:0}: Error finding container c13211968a970a8ad14b7f957b2c43054cf17ca5286e16c3b35f02a1eaad2e52: Status 404 returned error can't find the container with id c13211968a970a8ad14b7f957b2c43054cf17ca5286e16c3b35f02a1eaad2e52 Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.032699 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" podStartSLOduration=133.03267327 podStartE2EDuration="2m13.03267327s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:08.989968978 +0000 UTC m=+150.012506603" watchObservedRunningTime="2026-01-23 13:35:09.03267327 +0000 UTC m=+150.055210895" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.079004 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.079388 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.579371461 +0000 UTC m=+150.601909086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.090263 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" podStartSLOduration=133.090241262 podStartE2EDuration="2m13.090241262s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:09.032572917 +0000 UTC m=+150.055110552" watchObservedRunningTime="2026-01-23 13:35:09.090241262 +0000 UTC m=+150.112778887" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.172861 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-84f77" podStartSLOduration=133.172835825 podStartE2EDuration="2m13.172835825s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:09.157816309 +0000 UTC m=+150.180353944" watchObservedRunningTime="2026-01-23 13:35:09.172835825 +0000 UTC m=+150.195373470" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.180301 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.180644 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.680631617 +0000 UTC m=+150.703169242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.256134 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-m58rh"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.256194 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2nhhg"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.285256 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.286139 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.786098189 +0000 UTC m=+150.808635814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.325328 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.349696 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-c9dr9"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.351955 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.388017 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.388488 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.888470742 +0000 UTC m=+150.911008367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: W0123 13:35:09.397023 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6a46658_8c32_45d8_bf00_1cf0ba747194.slice/crio-924abd18bbbb2ec138ffd2c57f1363110c04578d8059444d60c132ac57f42ddd WatchSource:0}: Error finding container 924abd18bbbb2ec138ffd2c57f1363110c04578d8059444d60c132ac57f42ddd: Status 404 returned error can't find the container with id 924abd18bbbb2ec138ffd2c57f1363110c04578d8059444d60c132ac57f42ddd Jan 23 13:35:09 crc kubenswrapper[4771]: W0123 13:35:09.440511 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod111f88b6_b7d7_4f59_9448_78697734f048.slice/crio-ed0990ce2c2a5bced03acf999a47cdd41a40377effaab131e630884a656acf50 WatchSource:0}: Error finding container ed0990ce2c2a5bced03acf999a47cdd41a40377effaab131e630884a656acf50: Status 404 returned error can't find the container with id ed0990ce2c2a5bced03acf999a47cdd41a40377effaab131e630884a656acf50 Jan 23 13:35:09 crc kubenswrapper[4771]: W0123 13:35:09.443041 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaad75c66_50a7_4716_8e83_1335b20d6d07.slice/crio-93e2e2cc47ac7c7445a2fbb500d57c2fb84ca7f662d95d04bc5dc8b2cf918fa6 WatchSource:0}: Error finding container 93e2e2cc47ac7c7445a2fbb500d57c2fb84ca7f662d95d04bc5dc8b2cf918fa6: Status 404 returned error can't find the container with id 93e2e2cc47ac7c7445a2fbb500d57c2fb84ca7f662d95d04bc5dc8b2cf918fa6 Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.490201 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.490716 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:09.990679158 +0000 UTC m=+151.013216793 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.567931 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fbmxq"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.591896 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.592268 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.092253425 +0000 UTC m=+151.114791050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: W0123 13:35:09.599928 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff39a4f5_5820_481f_9209_08004e3e5280.slice/crio-8a111ef7752f20895a9cf6115b0a8100f06ae6a401f0d6b135444e51abec3de8 WatchSource:0}: Error finding container 8a111ef7752f20895a9cf6115b0a8100f06ae6a401f0d6b135444e51abec3de8: Status 404 returned error can't find the container with id 8a111ef7752f20895a9cf6115b0a8100f06ae6a401f0d6b135444e51abec3de8 Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.691613 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" event={"ID":"ef981f89-01c0-438a-a1b3-1f0e18d3496e","Type":"ContainerStarted","Data":"9de622ac68ec741c6e15625a0654aa0441767a63a85bdaff41bd14e01edc6ca4"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.692467 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.692741 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 13:35:10.192726156 +0000 UTC m=+151.215263781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.699130 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" event={"ID":"60374146-a25c-42d9-82d8-dcad9368144c","Type":"ContainerStarted","Data":"efbbd13ba52c70abfd428464fb66b84500d01f04639d5e8883c32cb9731fd163"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.724101 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" event={"ID":"125049fc-2ad3-4834-929b-58894ab55ec7","Type":"ContainerStarted","Data":"406e24c09ff8ca7e6e97ae88ecfa6082039747baf1314cedc6002d8c0c121890"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.749442 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.749807 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.766683 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" event={"ID":"ff39a4f5-5820-481f-9209-08004e3e5280","Type":"ContainerStarted","Data":"8a111ef7752f20895a9cf6115b0a8100f06ae6a401f0d6b135444e51abec3de8"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.772540 4771 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gftf6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]log ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]etcd ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/max-in-flight-filter ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 13:35:09 crc kubenswrapper[4771]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 13:35:09 crc kubenswrapper[4771]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/openshift.io-startinformers ok Jan 23 13:35:09 crc kubenswrapper[4771]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 13:35:09 crc kubenswrapper[4771]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 13:35:09 crc kubenswrapper[4771]: livez check failed Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.772625 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" podUID="b6ac54a4-888f-4c81-b7eb-5b5ee0cce5b9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.774664 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" event={"ID":"111f88b6-b7d7-4f59-9448-78697734f048","Type":"ContainerStarted","Data":"ed0990ce2c2a5bced03acf999a47cdd41a40377effaab131e630884a656acf50"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.781190 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" event={"ID":"218b450b-13f3-42fe-b5e4-c51ca704b015","Type":"ContainerStarted","Data":"b0ac042d8c2e5b9c6df1cc3a64bcef3823895ce2bfc4b7b505f932bd0d75ec59"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.794245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.794633 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.294618023 +0000 UTC m=+151.317155648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.797308 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" event={"ID":"da1ca945-ccee-4468-8941-13ce9115dc6d","Type":"ContainerStarted","Data":"b92004499f1f3ccfa6d9466d0beeea63c70cf97fc5850b48f26fd7de32321406"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.810613 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.826507 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hgksm" event={"ID":"8d937404-443a-4d0c-ab8c-4d61cebc4b18","Type":"ContainerStarted","Data":"60aab662cdc92de97b512976d1cb755c0af33347b95b59dcb5f97cc613de2b98"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.826772 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hgksm" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.837136 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgksm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.837199 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hgksm" podUID="8d937404-443a-4d0c-ab8c-4d61cebc4b18" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 23 13:35:09 crc kubenswrapper[4771]: W0123 13:35:09.840371 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3434d12e_d777_4664_a29a_1d2598306b09.slice/crio-1a15fd4d9766c9451a6c64ffd1fcde39ebf76fa68545fabd1418dee3244c9601 WatchSource:0}: Error finding container 1a15fd4d9766c9451a6c64ffd1fcde39ebf76fa68545fabd1418dee3244c9601: Status 404 returned error can't find the container with id 1a15fd4d9766c9451a6c64ffd1fcde39ebf76fa68545fabd1418dee3244c9601 Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.843359 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2nhhg" event={"ID":"8e7e312a-b17a-497c-b14a-11bdecbe5d0c","Type":"ContainerStarted","Data":"87371d163e63449222ea45abab1feb0026f8496708b9cd22b2a6836623a28e8c"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.850712 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hgksm" podStartSLOduration=133.850692637 podStartE2EDuration="2m13.850692637s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:09.844859888 +0000 UTC m=+150.867397523" 
watchObservedRunningTime="2026-01-23 13:35:09.850692637 +0000 UTC m=+150.873230262" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.864213 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" event={"ID":"7afe73b6-e696-4a30-90b6-8ac66d83fe51","Type":"ContainerStarted","Data":"53a3f56d89afb7d9096041766193d3e8a96bb9514633681d30d5cb12755d6360"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.866953 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.867178 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" event={"ID":"3cc8f639-4d97-4b72-9453-c3d5ede2b322","Type":"ContainerStarted","Data":"c13211968a970a8ad14b7f957b2c43054cf17ca5286e16c3b35f02a1eaad2e52"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.872972 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.878787 4771 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-skbsz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.878935 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" podUID="a88dbdcd-6064-4186-8edd-16341379ef97" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.892616 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" podStartSLOduration=132.892599623 podStartE2EDuration="2m12.892599623s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:09.891306271 +0000 UTC m=+150.913843916" watchObservedRunningTime="2026-01-23 13:35:09.892599623 +0000 UTC m=+150.915137248" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.893041 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hh6f4" event={"ID":"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c","Type":"ContainerStarted","Data":"1420b5817a399e85f16fcfed4ce104ee08216432daaa7ce4f63b0bead5c04b7c"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.895160 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:09 crc kubenswrapper[4771]: E0123 13:35:09.895640 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.395613391 +0000 UTC m=+151.418151016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.911315 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" event={"ID":"1d4bbd63-09df-418e-8ced-81942892cc71","Type":"ContainerStarted","Data":"1a797d8912522671e738e715de7ab9be8a20161bcc8e15bdf3c392be3b8398ac"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.917687 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" event={"ID":"f6a46658-8c32-45d8-bf00-1cf0ba747194","Type":"ContainerStarted","Data":"924abd18bbbb2ec138ffd2c57f1363110c04578d8059444d60c132ac57f42ddd"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.933527 4771 generic.go:334] "Generic (PLEG): container finished" podID="5e4fb568-7f2c-48cf-8cb8-7888b24016d1" containerID="8216ef662b2c97a90f5e50633d3454041700664ba227461ce05b4ac4db8c93e5" exitCode=0 Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.933654 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" event={"ID":"5e4fb568-7f2c-48cf-8cb8-7888b24016d1","Type":"ContainerDied","Data":"8216ef662b2c97a90f5e50633d3454041700664ba227461ce05b4ac4db8c93e5"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.967887 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" event={"ID":"aad75c66-50a7-4716-8e83-1335b20d6d07","Type":"ContainerStarted","Data":"93e2e2cc47ac7c7445a2fbb500d57c2fb84ca7f662d95d04bc5dc8b2cf918fa6"} Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.989013 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.992712 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7qwqx"] Jan 23 13:35:09 crc kubenswrapper[4771]: I0123 13:35:09.996770 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc"] Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.022362 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.035808 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.535747235 +0000 UTC m=+151.558284860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.069010 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9"] Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.131096 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.131750 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.631716349 +0000 UTC m=+151.654253974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.132060 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.132667 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.63264627 +0000 UTC m=+151.655183895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.233302 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.233475 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.733452011 +0000 UTC m=+151.755989636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.233701 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.234016 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.734007349 +0000 UTC m=+151.756544974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.334667 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.335337 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.835322737 +0000 UTC m=+151.857860352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.437782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.438084 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:10.938070442 +0000 UTC m=+151.960608067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.538850 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.539246 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.039198694 +0000 UTC m=+152.061736319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.539739 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.540229 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.040212756 +0000 UTC m=+152.062750381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.640946 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.641555 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.141525984 +0000 UTC m=+152.164063609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.743173 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.743750 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.243736392 +0000 UTC m=+152.266274007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.844843 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.845612 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.345597477 +0000 UTC m=+152.368135102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.946425 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:10 crc kubenswrapper[4771]: E0123 13:35:10.947071 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.44704974 +0000 UTC m=+152.469587365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.982870 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" event={"ID":"1d4bbd63-09df-418e-8ced-81942892cc71","Type":"ContainerStarted","Data":"e4878b223d650c5740c2c707841b48653d9bb0648245c8f155e471a639d5845f"} Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.982915 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" event={"ID":"1d4bbd63-09df-418e-8ced-81942892cc71","Type":"ContainerStarted","Data":"15370e7b6bd381747f2f24990c87a25ce09343a102ba225f209863ff52c65123"} Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.986617 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" event={"ID":"111f88b6-b7d7-4f59-9448-78697734f048","Type":"ContainerStarted","Data":"f4a91e720bc8c10fc08c65cb442f13a5134368f7bd1ea31479c676f9fcc0f6de"} Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.990869 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" event={"ID":"a88dbdcd-6064-4186-8edd-16341379ef97","Type":"ContainerStarted","Data":"aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa"} Jan 23 13:35:10 crc kubenswrapper[4771]: I0123 13:35:10.998830 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.003807 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" event={"ID":"218b450b-13f3-42fe-b5e4-c51ca704b015","Type":"ContainerStarted","Data":"a31bc4030b753f633383732de2b51ba499f37368be6cc77e087b805456b00b20"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.003854 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" event={"ID":"218b450b-13f3-42fe-b5e4-c51ca704b015","Type":"ContainerStarted","Data":"3728841bed7b3dea7cd3f3e9b149d192c851e9fd5597e6cbe8bc4d09e3c6f066"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.011742 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rlk5w" podStartSLOduration=134.011717012 podStartE2EDuration="2m14.011717012s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.009349435 +0000 UTC m=+152.031887060" watchObservedRunningTime="2026-01-23 13:35:11.011717012 +0000 UTC m=+152.034254647" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.036327 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8v9hg" podStartSLOduration=134.036307297 podStartE2EDuration="2m14.036307297s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.0338999 +0000 UTC m=+152.056437545" watchObservedRunningTime="2026-01-23 13:35:11.036307297 +0000 UTC m=+152.058844942" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.037295 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" event={"ID":"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6","Type":"ContainerStarted","Data":"9e8dacd86fd379a914a4458218e6641fec9e83dc66a1b5c798a09bebcd3ed912"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.037339 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.037361 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" event={"ID":"d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6","Type":"ContainerStarted","Data":"85ed83364e439741c742e60980fa49c3084a6542d9bb2bca6b2d5bd1a456c314"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.038731 4771 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sg5d9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.038774 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" podUID="d2d81bc9-a9e0-4adc-8139-ef3d0e5f90f6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.054012 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.055934 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.555913602 +0000 UTC m=+152.578451227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.083322 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" event={"ID":"da1ca945-ccee-4468-8941-13ce9115dc6d","Type":"ContainerStarted","Data":"4243687ab26086b25bd958309a3f84b8a70673b7871b8af4703443081c24a3f3"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.083720 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" event={"ID":"da1ca945-ccee-4468-8941-13ce9115dc6d","Type":"ContainerStarted","Data":"4216199fdfb4f4f6716e3d507538b8e7a09dd73ea3aa6625622cf3a0306d41f3"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.084113 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.096582 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2nhhg" event={"ID":"8e7e312a-b17a-497c-b14a-11bdecbe5d0c","Type":"ContainerStarted","Data":"45e8fd08b3eb04dcba51fad0739b8dd7e8e2110767bdf642fd24a7fc7ed09567"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.103318 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9" podStartSLOduration=134.103294885 podStartE2EDuration="2m14.103294885s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.099728739 +0000 UTC m=+152.122266364" watchObservedRunningTime="2026-01-23 13:35:11.103294885 +0000 UTC m=+152.125832510" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.110619 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" event={"ID":"c9a559ae-d103-4979-bb70-6fb0a326f4b5","Type":"ContainerStarted","Data":"23c0e7662540ca3d9b9035b18e1fa702c95f9024e48d9c5f377f0006319d9ef3"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.140780 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" podStartSLOduration=134.140761927 podStartE2EDuration="2m14.140761927s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.137904865 +0000 UTC m=+152.160442500" watchObservedRunningTime="2026-01-23 13:35:11.140761927 +0000 UTC m=+152.163299552" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.155780 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" 
event={"ID":"60374146-a25c-42d9-82d8-dcad9368144c","Type":"ContainerStarted","Data":"909a210765b034887eb35cd1ee5622ef1f62bbb6717b84cecd82900d9bedaf77"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.156950 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.158436 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.658395637 +0000 UTC m=+152.680933452 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.179848 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" event={"ID":"5e4fb568-7f2c-48cf-8cb8-7888b24016d1","Type":"ContainerStarted","Data":"264d3bf830a51ae54bf7c8260a0d6795afe0b12b1bc6af99f0882bf4581b5a77"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.190065 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d67nm" podStartSLOduration=135.19001567 podStartE2EDuration="2m15.19001567s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.188218173 +0000 UTC m=+152.210755808" watchObservedRunningTime="2026-01-23 13:35:11.19001567 +0000 UTC m=+152.212553296" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.212649 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-g658k" podStartSLOduration=134.212625952 podStartE2EDuration="2m14.212625952s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.210895807 +0000 UTC m=+152.233433452" watchObservedRunningTime="2026-01-23 13:35:11.212625952 +0000 UTC m=+152.235163597" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.265517 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.265843 4771 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.765818933 +0000 UTC m=+152.788356568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.266105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.268255 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.768241871 +0000 UTC m=+152.790779706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.270531 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" event={"ID":"2e00edbb-9068-41bf-b0af-d9a37af2880e","Type":"ContainerStarted","Data":"4ef037283019f2cd4a3d2662507a25c03265ca435a06c702ef7d1ffed6df62b1"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.270575 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" event={"ID":"2e00edbb-9068-41bf-b0af-d9a37af2880e","Type":"ContainerStarted","Data":"ea9e809a47e87999a230ff33dacb37b01cd649b03bd92cfff506df358ad8cca3"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.279678 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hh6f4" event={"ID":"800449e3-31d8-4dcb-9af1-8a5fe9ae9b8c","Type":"ContainerStarted","Data":"76ad0b1cd5ab5756b81595f1867ccdfe6dc6021edef908c5f28f3d34102a6ce9"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.347454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" event={"ID":"bedfc0c0-0c8c-4c3f-8561-e5c7f969f578","Type":"ContainerStarted","Data":"c1c106cd805332ab3d86bc0e67256b1e9d89526419ebea4d517338fe6caa7087"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.355219 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" event={"ID":"9466ce77-2c46-4094-8388-4c99168b2792","Type":"ContainerStarted","Data":"2b7ec23829aa858b4d270e5eca463d5d79d123608a5941a87d7a542eae7da3db"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.355274 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" event={"ID":"9466ce77-2c46-4094-8388-4c99168b2792","Type":"ContainerStarted","Data":"a4b07806c06d8879a46f507917d8efc84dec844adf71020fd6a022d5c76e387b"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.357102 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" event={"ID":"7afe73b6-e696-4a30-90b6-8ac66d83fe51","Type":"ContainerStarted","Data":"17ca9567c352edd29f919e217eb1e1415d7608d94df601c56c16405c3d64751b"} Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.357701 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.367026 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.368676 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.86864895 +0000 UTC m=+152.891186585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.388791 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" event={"ID":"ff39a4f5-5820-481f-9209-08004e3e5280","Type":"ContainerStarted","Data":"1f2ad5b63257777b96b213525665ee3320bc2bdf4f28bd0320627723fc22cd52"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.392577 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.402070 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8tfhc" podStartSLOduration=134.402050511 podStartE2EDuration="2m14.402050511s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.36771781 +0000 UTC m=+152.390255456" watchObservedRunningTime="2026-01-23 13:35:11.402050511 +0000 UTC m=+152.424588136"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.411570 4771 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fbmxq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.411640 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" podUID="ff39a4f5-5820-481f-9209-08004e3e5280" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.411724 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.413023 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" podStartSLOduration=134.413010326 podStartE2EDuration="2m14.413010326s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.249302009 +0000 UTC m=+152.271839644" watchObservedRunningTime="2026-01-23 13:35:11.413010326 +0000 UTC m=+152.435547951"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.422303 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hh6f4" podStartSLOduration=8.422287906 podStartE2EDuration="8.422287906s" podCreationTimestamp="2026-01-23 13:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.421239342 +0000 UTC m=+152.443776967" watchObservedRunningTime="2026-01-23 13:35:11.422287906 +0000 UTC m=+152.444825531"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.443778 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" event={"ID":"7eaa22e2-97b3-4e61-835d-c9c293d1c515","Type":"ContainerStarted","Data":"a2bb1a91ecf207b7ecd9196775ed90faade42563337a0dcf616ee2fda6a7e190"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.459253 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" event={"ID":"f6a46658-8c32-45d8-bf00-1cf0ba747194","Type":"ContainerStarted","Data":"30a889f9589f5014fb903088f5cc90157740d820df883ca95f500f8ad5fe2c4d"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.471239 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.475761 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:11.975741885 +0000 UTC m=+152.998279500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.500826 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" event={"ID":"aad75c66-50a7-4716-8e83-1335b20d6d07","Type":"ContainerStarted","Data":"d0fa80a89695e9edfd50ad40d1f6bc4a45f2b3724a7aeb0585fae736599ee9ef"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.501435 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.550979 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5k7z5" event={"ID":"67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e","Type":"ContainerStarted","Data":"02436dc11e8541c23ce0d52af88b1ca6d14656cf26571352cb98942b84f9b5ed"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.571979 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.572292 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.072257208 +0000 UTC m=+153.094794853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.579752 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8xvh2" podStartSLOduration=134.57973146 podStartE2EDuration="2m14.57973146s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.501375915 +0000 UTC m=+152.523913540" watchObservedRunningTime="2026-01-23 13:35:11.57973146 +0000 UTC m=+152.602269085"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.581154 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.598771 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" event={"ID":"28667350-f72e-42d9-92d3-1e45074aa44c","Type":"ContainerStarted","Data":"fcf693d7940a4a94cfddb4f09fe3837080a1d9b1a6e339034889433661a5e6c5"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.639718 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wzpft" podStartSLOduration=134.639697111 podStartE2EDuration="2m14.639697111s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.581019912 +0000 UTC m=+152.603557537" watchObservedRunningTime="2026-01-23 13:35:11.639697111 +0000 UTC m=+152.662234726"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.641666 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjhrj" podStartSLOduration=134.641659524 podStartE2EDuration="2m14.641659524s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.640758054 +0000 UTC m=+152.663295679" watchObservedRunningTime="2026-01-23 13:35:11.641659524 +0000 UTC m=+152.664197139"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.655337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" event={"ID":"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019","Type":"ContainerStarted","Data":"161db50393c3f4625ca389f0620bc88476e86863a6e33bd52d219af6a2751e5b"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.689321 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.690530 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.190515324 +0000 UTC m=+153.213052949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.703454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" event={"ID":"ea9c810d-0ac0-4528-98a7-a3b349a28a9e","Type":"ContainerStarted","Data":"034b208c6e2ba46412659a125f55290739a2b0437b6cd9dd88b7b372c7f6b24c"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.703498 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" event={"ID":"ea9c810d-0ac0-4528-98a7-a3b349a28a9e","Type":"ContainerStarted","Data":"2e4d82ec3d7f1ce0cad4a9925bc82617ae9fc1f726f84b00d26c252b0c2356cb"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.739711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" event={"ID":"251bb7d1-205b-4625-980e-3636bb66f8bc","Type":"ContainerStarted","Data":"ad07ec4dcb6747488d0de34ce5363cbcdd46a1955f606d78bc865cc3b6057336"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.773716 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7qwqx" event={"ID":"bb331896-5a8e-466c-a152-ef8b744289d2","Type":"ContainerStarted","Data":"58926baa35e513a945713ab2661902a7ccc1d3a2093fdb2b6264f971657e6c4b"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.773775 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7qwqx" event={"ID":"bb331896-5a8e-466c-a152-ef8b744289d2","Type":"ContainerStarted","Data":"ac2abbdbbb34ae3236776bb632a65ab4c0298adafe1cd791642e1cdd422dc977"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.790320 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.790545 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.29051026 +0000 UTC m=+153.313047935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.794896 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5k7z5" podStartSLOduration=135.794871621 podStartE2EDuration="2m15.794871621s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.793702184 +0000 UTC m=+152.816239829" watchObservedRunningTime="2026-01-23 13:35:11.794871621 +0000 UTC m=+152.817409246"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.800244 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" event={"ID":"3cc8f639-4d97-4b72-9453-c3d5ede2b322","Type":"ContainerStarted","Data":"5816aff94eaa2114a9a397da460cd5cc0a5428ae81d678db554d14f2a27152da"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.800302 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" event={"ID":"3cc8f639-4d97-4b72-9453-c3d5ede2b322","Type":"ContainerStarted","Data":"84885cdbcc836e8354190b380205656e0919db3169b0ca354903f106f6288006"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.801605 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkjjk" podStartSLOduration=134.801567238 podStartE2EDuration="2m14.801567238s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.68852996 +0000 UTC m=+152.711067585" watchObservedRunningTime="2026-01-23 13:35:11.801567238 +0000 UTC m=+152.824104873"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.829735 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" event={"ID":"26f3377f-65d2-4673-bab4-ad00eb946a4d","Type":"ContainerStarted","Data":"f088c8e72ff799c834e5c7ed7ccbf660ff2e11cad37e31191f0f7ee571a93feb"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.829794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" event={"ID":"26f3377f-65d2-4673-bab4-ad00eb946a4d","Type":"ContainerStarted","Data":"3d01ecdd24d5012c4555b895ca0c98374e02fe7f989e8ddc7a3297f1310bb92d"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.834046 4771 generic.go:334] "Generic (PLEG): container finished" podID="7616383c-6e4c-4ae6-8fe8-f573ab4cdad9" containerID="0b57232f4c36be3453804fea9f16834365a7ce50b3670f4537aed12a2e5617a4" exitCode=0
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.834113 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" event={"ID":"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9","Type":"ContainerDied","Data":"0b57232f4c36be3453804fea9f16834365a7ce50b3670f4537aed12a2e5617a4"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.870850 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" event={"ID":"ef981f89-01c0-438a-a1b3-1f0e18d3496e","Type":"ContainerStarted","Data":"76cd4fe634652d914434cf5ae9708549add23aca6f7b069f32e75eff498ef548"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.888091 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-m58rh" podStartSLOduration=134.888071417 podStartE2EDuration="2m14.888071417s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.887499248 +0000 UTC m=+152.910036883" watchObservedRunningTime="2026-01-23 13:35:11.888071417 +0000 UTC m=+152.910609042"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.901837 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" event={"ID":"b18bc30d-8ab7-4e77-a7cd-76e18fd0e79a","Type":"ContainerStarted","Data":"e0d98ccfd4d2ca69f40db6760ceaea1d77c50cf3010e212d0b77c6ebe2d1813d"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.902654 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:11 crc kubenswrapper[4771]: E0123 13:35:11.904564 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.40454998 +0000 UTC m=+153.427087605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.936461 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2cntv" podStartSLOduration=135.936443542 podStartE2EDuration="2m15.936443542s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.935449539 +0000 UTC m=+152.957987164" watchObservedRunningTime="2026-01-23 13:35:11.936443542 +0000 UTC m=+152.958981177"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.941292 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" event={"ID":"3434d12e-d777-4664-a29a-1d2598306b09","Type":"ContainerStarted","Data":"f98063633c73baecbcacef77b0c1a7e98317ae3aed02c284ac703573c5dedf91"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.941334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" event={"ID":"3434d12e-d777-4664-a29a-1d2598306b09","Type":"ContainerStarted","Data":"1a15fd4d9766c9451a6c64ffd1fcde39ebf76fa68545fabd1418dee3244c9601"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.969878 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" event={"ID":"125049fc-2ad3-4834-929b-58894ab55ec7","Type":"ContainerStarted","Data":"ae0f2ce5bd358387c0ad54becef2443203f93eaa510e7844ad34d8680dc943ca"}
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.971015 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgksm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.971064 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hgksm" podUID="8d937404-443a-4d0c-ab8c-4d61cebc4b18" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 23 13:35:11 crc kubenswrapper[4771]: I0123 13:35:11.978366 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" podStartSLOduration=134.978341227 podStartE2EDuration="2m14.978341227s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:11.977470629 +0000 UTC m=+153.000008274" watchObservedRunningTime="2026-01-23 13:35:11.978341227 +0000 UTC m=+153.000878862"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.005098 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.006579 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.506556831 +0000 UTC m=+153.529094456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.018635 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sj8wv" podStartSLOduration=135.01861203 podStartE2EDuration="2m15.01861203s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.01521862 +0000 UTC m=+153.037756245" watchObservedRunningTime="2026-01-23 13:35:12.01861203 +0000 UTC m=+153.041149645"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.051327 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-95nnz" podStartSLOduration=136.051306578 podStartE2EDuration="2m16.051306578s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.050961607 +0000 UTC m=+153.073499242" watchObservedRunningTime="2026-01-23 13:35:12.051306578 +0000 UTC m=+153.073844203"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.109938 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.113643 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.613623314 +0000 UTC m=+153.636161149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.142497 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5k7z5"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.149244 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 13:35:12 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld
Jan 23 13:35:12 crc kubenswrapper[4771]: [+]process-running ok
Jan 23 13:35:12 crc kubenswrapper[4771]: healthz check failed
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.149297 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.204084 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z5t5f" podStartSLOduration=135.20404911 podStartE2EDuration="2m15.20404911s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.144024007 +0000 UTC m=+153.166561632" watchObservedRunningTime="2026-01-23 13:35:12.20404911 +0000 UTC m=+153.226586735"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.211199 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.211934 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.711915394 +0000 UTC m=+153.734453019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.230939 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-hrl92" podStartSLOduration=135.230919609 podStartE2EDuration="2m15.230919609s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.230676151 +0000 UTC m=+153.253213786" watchObservedRunningTime="2026-01-23 13:35:12.230919609 +0000 UTC m=+153.253457234"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.262103 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7qwqx" podStartSLOduration=8.262078857 podStartE2EDuration="8.262078857s" podCreationTimestamp="2026-01-23 13:35:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.260805517 +0000 UTC m=+153.283343142" watchObservedRunningTime="2026-01-23 13:35:12.262078857 +0000 UTC m=+153.284616492"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.313115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.313488 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.813473231 +0000 UTC m=+153.836010856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.357785 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" podStartSLOduration=136.357766853 podStartE2EDuration="2m16.357766853s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.305313067 +0000 UTC m=+153.327850692" watchObservedRunningTime="2026-01-23 13:35:12.357766853 +0000 UTC m=+153.380304478"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.392842 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z6bkl" podStartSLOduration=136.392821048 podStartE2EDuration="2m16.392821048s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.392719995 +0000 UTC m=+153.415257620" watchObservedRunningTime="2026-01-23 13:35:12.392821048 +0000 UTC m=+153.415358673"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.414903 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.415435 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:12.915399568 +0000 UTC m=+153.937937193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.469115 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6l4kn" podStartSLOduration=135.469092215 podStartE2EDuration="2m15.469092215s" podCreationTimestamp="2026-01-23 13:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.432228642 +0000 UTC m=+153.454766267" watchObservedRunningTime="2026-01-23 13:35:12.469092215 +0000 UTC m=+153.491629860"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.503823 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8d7hk" podStartSLOduration=136.503798978 podStartE2EDuration="2m16.503798978s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.470893424 +0000 UTC m=+153.493431059" watchObservedRunningTime="2026-01-23 13:35:12.503798978 +0000 UTC m=+153.526336603"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.517226 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.517663 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.017646966 +0000 UTC m=+154.040184591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.559193 4771 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.618963 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.619146 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.11911742 +0000 UTC m=+154.141655045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.619381 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.619789 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.11977109 +0000 UTC m=+154.142308715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.720025 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.720391 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.220373536 +0000 UTC m=+154.242911161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.821134 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.821498 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.321478146 +0000 UTC m=+154.344015771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.866899 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8r5lw" podStartSLOduration=136.866880686 podStartE2EDuration="2m16.866880686s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.504710838 +0000 UTC m=+153.527248473" watchObservedRunningTime="2026-01-23 13:35:12.866880686 +0000 UTC m=+153.889418311"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.867621 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vfx4h"]
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.868657 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.870468 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.880382 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfx4h"]
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.923130 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.923427 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.423345533 +0000 UTC m=+154.445883168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.923627 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-utilities\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.923764 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-catalog-content\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.923843 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.923940 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfg4c\" (UniqueName: \"kubernetes.io/projected/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-kube-api-access-dfg4c\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:12 crc kubenswrapper[4771]: E0123 13:35:12.924264 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.424242572 +0000 UTC m=+154.446780197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.977823 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" event={"ID":"7616383c-6e4c-4ae6-8fe8-f573ab4cdad9","Type":"ContainerStarted","Data":"ced96e51172aeac6e59c7352b464254d98533a13e899ad297ea6349c889e027f"}
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.978851 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.980378 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" event={"ID":"111f88b6-b7d7-4f59-9448-78697734f048","Type":"ContainerStarted","Data":"9ded8572e253466ab27c7dbd98c11e12bf522f7ad5924b0666822bfa7a86255e"}
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.982621 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2nhhg" event={"ID":"8e7e312a-b17a-497c-b14a-11bdecbe5d0c","Type":"ContainerStarted","Data":"cfde1cd4363b7fb2dfec56bd34972bf044be65d070a3ecae8caffde796ffcfde"}
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.982801 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2nhhg"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.985125 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" event={"ID":"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019","Type":"ContainerStarted","Data":"a45d0f3fb790e011db53f157ca94c00926f4f9b40eeaf71bbc53840ecd3ea7bf"}
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.985152 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" event={"ID":"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019","Type":"ContainerStarted","Data":"30ab82045b870aba5fd002583e8bdb40f870c8a00d14ad2d6c52ce6a3a5d1638"}
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.985163 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" event={"ID":"cbdad9fd-f5c7-4ae8-ab0e-1c9e3fbdf019","Type":"ContainerStarted","Data":"7cbe9ed5b31d993edfdeaa6fb8720d95551ece5d32934d796f37f56c659fe66b"}
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.997022 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" podStartSLOduration=136.996996266 podStartE2EDuration="2m16.996996266s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:12.993869075 +0000 UTC m=+154.016406710" watchObservedRunningTime="2026-01-23 13:35:12.996996266 +0000 UTC m=+154.019533911"
Jan 23 13:35:12 crc kubenswrapper[4771]: I0123 13:35:12.997057 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.018831 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2nhhg" podStartSLOduration=10.018812842 podStartE2EDuration="10.018812842s" podCreationTimestamp="2026-01-23 13:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:13.013851172 +0000 UTC m=+154.036388827" watchObservedRunningTime="2026-01-23 13:35:13.018812842 +0000 UTC m=+154.041350467"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.025540 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.026349 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-utilities\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.026965 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-catalog-content\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:13 crc kubenswrapper[4771]: E0123 13:35:13.027216 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.527175632 +0000 UTC m=+154.549713257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.027597 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.027842 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfg4c\" (UniqueName: \"kubernetes.io/projected/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-kube-api-access-dfg4c\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.028144 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-catalog-content\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:13 crc kubenswrapper[4771]: E0123 13:35:13.032097 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.532080891 +0000 UTC m=+154.554618726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.033340 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-utilities\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.052214 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sg5d9"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.054330 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-lc7gv" podStartSLOduration=10.05431544 podStartE2EDuration="10.05431544s" podCreationTimestamp="2026-01-23 13:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:13.053796173 +0000 UTC m=+154.076333788" watchObservedRunningTime="2026-01-23 13:35:13.05431544 +0000 UTC m=+154.076853065"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.078049 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-622jl"]
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.081228 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.082620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfg4c\" (UniqueName: \"kubernetes.io/projected/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-kube-api-access-dfg4c\") pod \"community-operators-vfx4h\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") " pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.083587 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.090681 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-c9dr9" podStartSLOduration=137.090654866 podStartE2EDuration="2m17.090654866s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:13.083001719 +0000 UTC m=+154.105539354" watchObservedRunningTime="2026-01-23 13:35:13.090654866 +0000 UTC m=+154.113192491"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.100074 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-622jl"]
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.132666 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.133005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-utilities\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.133045 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-catalog-content\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.133078 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwjj\" (UniqueName: \"kubernetes.io/projected/a8fac1eb-9145-43c5-83c8-bda72dae51d5-kube-api-access-xjwjj\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: E0123 13:35:13.133248 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.633222603 +0000 UTC m=+154.655760228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.153887 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 13:35:13 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld
Jan 23 13:35:13 crc kubenswrapper[4771]: [+]process-running ok
Jan 23 13:35:13 crc kubenswrapper[4771]: healthz check failed
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.153979 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.226033 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.235295 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.235374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-utilities\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.235393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-catalog-content\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.235438 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwjj\" (UniqueName: \"kubernetes.io/projected/a8fac1eb-9145-43c5-83c8-bda72dae51d5-kube-api-access-xjwjj\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: E0123 13:35:13.236103 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.736087591 +0000 UTC m=+154.758625216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-grzg6" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.236692 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-utilities\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.236928 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-catalog-content\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.274259 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwjj\" (UniqueName: \"kubernetes.io/projected/a8fac1eb-9145-43c5-83c8-bda72dae51d5-kube-api-access-xjwjj\") pod \"certified-operators-622jl\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") " pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.276530 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ksd87"]
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.277624 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksd87"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.293759 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksd87"]
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.336226 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.336428 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-utilities\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.336483 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-catalog-content\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87"
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.336509 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkt98\" (UniqueName: \"kubernetes.io/projected/f132f5bb-4325-44e2-9fa6-6b240a1adb31-kube-api-access-dkt98\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87"
Jan 23 13:35:13 crc kubenswrapper[4771]: E0123 13:35:13.336651 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 13:35:13.836634465 +0000 UTC m=+154.859172090 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.394594 4771 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T13:35:12.559627355Z","Handler":null,"Name":""} Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.405770 4771 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.405842 4771 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.419761 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-622jl" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.439115 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-utilities\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.439174 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-catalog-content\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.439190 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkt98\" (UniqueName: \"kubernetes.io/projected/f132f5bb-4325-44e2-9fa6-6b240a1adb31-kube-api-access-dkt98\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.439248 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.440055 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-utilities\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.440288 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-catalog-content\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.456036 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.456081 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.485339 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vlbq"] Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.490347 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkt98\" (UniqueName: \"kubernetes.io/projected/f132f5bb-4325-44e2-9fa6-6b240a1adb31-kube-api-access-dkt98\") pod \"community-operators-ksd87\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.501690 4771 util.go:30] "No sandbox for pod can be found. 
Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.512738 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vlbq"] Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.545731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-utilities\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.545774 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chdr\" (UniqueName: \"kubernetes.io/projected/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-kube-api-access-6chdr\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.545788 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-catalog-content\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.588309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-grzg6\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") " pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.603893 4771 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.642938 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfx4h"] Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.648499 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.648648 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-utilities\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.648693 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6chdr\" (UniqueName: \"kubernetes.io/projected/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-kube-api-access-6chdr\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.648713 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-catalog-content\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.649548 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-catalog-content\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.650999 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-utilities\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: W0123 13:35:13.659381 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod330cd6a7_1942_4bf5_a7fc_b3acb8a00cf9.slice/crio-e2c32de3a931cb47ea93fd12634853ed98b679d339d4ffd914a7c367e0f2b9aa WatchSource:0}: Error finding container e2c32de3a931cb47ea93fd12634853ed98b679d339d4ffd914a7c367e0f2b9aa: Status 404 returned error can't find the container with id e2c32de3a931cb47ea93fd12634853ed98b679d339d4ffd914a7c367e0f2b9aa Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.675870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6chdr\" (UniqueName: \"kubernetes.io/projected/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-kube-api-access-6chdr\") pod \"certified-operators-6vlbq\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 
13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.683887 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.695816 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.753450 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-622jl"] Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.838214 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksd87"] Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.870019 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.915105 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-grzg6"] Jan 23 13:35:13 crc kubenswrapper[4771]: W0123 13:35:13.922955 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44d2ff5d_162b_4773_ac29_54fa11375b9a.slice/crio-bbe82e8a8236b79303fc6a9ef72b9df80b3f15ae955ddef8c377065ee093b77c WatchSource:0}: Error finding container bbe82e8a8236b79303fc6a9ef72b9df80b3f15ae955ddef8c377065ee093b77c: Status 404 returned error can't find the container with id bbe82e8a8236b79303fc6a9ef72b9df80b3f15ae955ddef8c377065ee093b77c Jan 23 13:35:13 crc kubenswrapper[4771]: I0123 13:35:13.990716 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksd87" event={"ID":"f132f5bb-4325-44e2-9fa6-6b240a1adb31","Type":"ContainerStarted","Data":"4e36110ea71231d52c4a663a852a75edece1c9e04f607f57887a301544d9e46e"} Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:13.991372 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622jl" event={"ID":"a8fac1eb-9145-43c5-83c8-bda72dae51d5","Type":"ContainerStarted","Data":"1f7b4d9545c9d2caaea62dde5382b09c52d4121abafa4e1ebddd3c2f1e907e82"} Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:13.992108 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" event={"ID":"44d2ff5d-162b-4773-ac29-54fa11375b9a","Type":"ContainerStarted","Data":"bbe82e8a8236b79303fc6a9ef72b9df80b3f15ae955ddef8c377065ee093b77c"} Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:13.993378 4771 generic.go:334] "Generic (PLEG): container finished" podID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerID="f369a2c5bc53ba930a94cda8309995fb71656640670119c087fe275561103c06" exitCode=0 Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:13.993537 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfx4h" event={"ID":"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9","Type":"ContainerDied","Data":"f369a2c5bc53ba930a94cda8309995fb71656640670119c087fe275561103c06"}
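
The "Generic (PLEG): container finished ... exitCode=0" lines paired with ContainerDied/ContainerStarted events above are consistent with the marketplace catalog pods' extract containers exiting normally before the serving container comes up, and the "Status 404 ... can't find the container" warnings are the stats manager racing the runtime during sandbox creation. When triaging a stream like this, a small scanner for PLEG lifecycle events helps; the following is an illustrative helper whose regexes target the klog fields shown here, not part of the kubelet:

// Sketch: summarize kubelet PLEG lifecycle events from a journal stream.
// Reads log text on stdin; prints pod, event type, container ID, exit codes.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	plegEvent = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)
	finished  = regexp.MustCompile(`"Generic \(PLEG\): container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(-?\d+)`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range finished.FindAllStringSubmatch(line, -1) {
			fmt.Printf("finished         pod=%s container=%.12s exitCode=%s\n", m[1], m[2], m[3])
		}
		for _, m := range plegEvent.FindAllStringSubmatch(line, -1) {
			fmt.Printf("%-16s pod=%s id=%.12s\n", m[3], m[1], m[4])
		}
	}
}

Feeding this section through it would show each catalog pod's sandbox ContainerStarted followed by an extract container ContainerDied with exitCode=0, which is the expected happy path rather than a crash loop.
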
Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:13.993567 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfx4h" event={"ID":"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9","Type":"ContainerStarted","Data":"e2c32de3a931cb47ea93fd12634853ed98b679d339d4ffd914a7c367e0f2b9aa"} Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:14.086072 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:14.114926 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vlbq"] Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:14.143993 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:14 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 23 13:35:14 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:14 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:14.144073 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:14 crc kubenswrapper[4771]: W0123 13:35:14.202060 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ec392b_4022_454f_ba4b_1a4d4d2edd87.slice/crio-1fdf096ff04f5e4272563369bd7a3e63bdc11c30d6aeae06258976c660ac297d WatchSource:0}: Error finding container 1fdf096ff04f5e4272563369bd7a3e63bdc11c30d6aeae06258976c660ac297d: Status 404 returned error can't find the container with id 1fdf096ff04f5e4272563369bd7a3e63bdc11c30d6aeae06258976c660ac297d Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:14.756138 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:14 crc kubenswrapper[4771]: I0123 13:35:14.763887 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gftf6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.000441 4771 generic.go:334] "Generic (PLEG): container finished" podID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerID="0b5f307e13bbe5d0c3916b9739dd67f0aaf1f4a38f5c45bedc8f49a86d056aff" exitCode=0 Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.000520 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksd87" event={"ID":"f132f5bb-4325-44e2-9fa6-6b240a1adb31","Type":"ContainerDied","Data":"0b5f307e13bbe5d0c3916b9739dd67f0aaf1f4a38f5c45bedc8f49a86d056aff"} Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.003139 4771 generic.go:334] "Generic (PLEG): container finished" podID="3434d12e-d777-4664-a29a-1d2598306b09" containerID="f98063633c73baecbcacef77b0c1a7e98317ae3aed02c284ac703573c5dedf91" exitCode=0 Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.003236 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd"
event={"ID":"3434d12e-d777-4664-a29a-1d2598306b09","Type":"ContainerDied","Data":"f98063633c73baecbcacef77b0c1a7e98317ae3aed02c284ac703573c5dedf91"} Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.005010 4771 generic.go:334] "Generic (PLEG): container finished" podID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerID="732b1dbdef267e136f6020990d2379aa06834bbce9387c6ee5964f13410d3e0e" exitCode=0 Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.005078 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622jl" event={"ID":"a8fac1eb-9145-43c5-83c8-bda72dae51d5","Type":"ContainerDied","Data":"732b1dbdef267e136f6020990d2379aa06834bbce9387c6ee5964f13410d3e0e"} Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.011452 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" event={"ID":"44d2ff5d-162b-4773-ac29-54fa11375b9a","Type":"ContainerStarted","Data":"64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3"} Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.011607 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.013183 4771 generic.go:334] "Generic (PLEG): container finished" podID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerID="dc98070498686ec4bbfdaba011f96a9befb97e3073c805c46c9c30a26e9b5d33" exitCode=0 Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.013213 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vlbq" event={"ID":"b9ec392b-4022-454f-ba4b-1a4d4d2edd87","Type":"ContainerDied","Data":"dc98070498686ec4bbfdaba011f96a9befb97e3073c805c46c9c30a26e9b5d33"} Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.013248 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vlbq" event={"ID":"b9ec392b-4022-454f-ba4b-1a4d4d2edd87","Type":"ContainerStarted","Data":"1fdf096ff04f5e4272563369bd7a3e63bdc11c30d6aeae06258976c660ac297d"} Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.020741 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-prp7p" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.060106 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vk9j6"] Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.061394 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.063123 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.080876 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk9j6"] Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.139553 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" podStartSLOduration=139.139528537 podStartE2EDuration="2m19.139528537s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:15.138745202 +0000 UTC m=+156.161282827" watchObservedRunningTime="2026-01-23 13:35:15.139528537 +0000 UTC m=+156.162066172" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.144947 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:15 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 23 13:35:15 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:15 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.145008 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.176473 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtfk6\" (UniqueName: \"kubernetes.io/projected/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-kube-api-access-xtfk6\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.177274 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-catalog-content\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.177435 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-utilities\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.239632 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.278957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtfk6\" (UniqueName: 
\"kubernetes.io/projected/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-kube-api-access-xtfk6\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.279027 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-catalog-content\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.279052 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-utilities\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.279708 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-catalog-content\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.279754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-utilities\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.310598 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtfk6\" (UniqueName: \"kubernetes.io/projected/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-kube-api-access-xtfk6\") pod \"redhat-marketplace-vk9j6\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") " pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.376894 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.462464 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpvn"] Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.464054 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.472977 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpvn"] Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.481858 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-catalog-content\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.481999 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-utilities\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.482027 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-994ss\" (UniqueName: \"kubernetes.io/projected/916553c9-819c-453c-b2f1-31529bba6bef-kube-api-access-994ss\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.582861 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-utilities\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.582907 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-994ss\" (UniqueName: \"kubernetes.io/projected/916553c9-819c-453c-b2f1-31529bba6bef-kube-api-access-994ss\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.582976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-catalog-content\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.583961 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-catalog-content\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.584011 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-utilities\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.618310 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-994ss\" (UniqueName: \"kubernetes.io/projected/916553c9-819c-453c-b2f1-31529bba6bef-kube-api-access-994ss\") pod \"redhat-marketplace-xmpvn\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.633709 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk9j6"] Jan 23 13:35:15 crc kubenswrapper[4771]: W0123 13:35:15.648558 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e885f8a_42cc_49ad_9b52_759c9adb8ad7.slice/crio-f5a410657b522e0322de9af3d51075444f57bf00eb7a376103aaca5e3c77e7ff WatchSource:0}: Error finding container f5a410657b522e0322de9af3d51075444f57bf00eb7a376103aaca5e3c77e7ff: Status 404 returned error can't find the container with id f5a410657b522e0322de9af3d51075444f57bf00eb7a376103aaca5e3c77e7ff Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.793788 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.904380 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.904996 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.905987 4771 patch_prober.go:28] interesting pod/console-f9d7485db-84f77 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.906041 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-84f77" podUID="6c1e299b-6a89-4d9c-87ff-e2937d66487d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.925498 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgksm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.925566 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hgksm" podUID="8d937404-443a-4d0c-ab8c-4d61cebc4b18" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.925591 4771 patch_prober.go:28] interesting pod/downloads-7954f5f757-hgksm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 23 13:35:15 crc kubenswrapper[4771]: I0123 13:35:15.925645 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hgksm" podUID="8d937404-443a-4d0c-ab8c-4d61cebc4b18" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.035957 4771 generic.go:334] "Generic (PLEG): container finished" podID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerID="3d8031716fd60e9c070afb10eb1805aa4ac8e716d30f770c46e561c452c6e0d1" exitCode=0 Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.036998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk9j6" event={"ID":"1e885f8a-42cc-49ad-9b52-759c9adb8ad7","Type":"ContainerDied","Data":"3d8031716fd60e9c070afb10eb1805aa4ac8e716d30f770c46e561c452c6e0d1"} Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.037020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk9j6" event={"ID":"1e885f8a-42cc-49ad-9b52-759c9adb8ad7","Type":"ContainerStarted","Data":"f5a410657b522e0322de9af3d51075444f57bf00eb7a376103aaca5e3c77e7ff"} Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.082499 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lgxnf"] Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.083453 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.086120 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.095876 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-catalog-content\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.095922 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-utilities\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.095969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psr2q\" (UniqueName: \"kubernetes.io/projected/91fa80db-fb28-4a7e-a93d-b1213f843dc1-kube-api-access-psr2q\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.098269 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lgxnf"] Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.143919 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:16 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 23 13:35:16 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:16 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.143965 4771 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.164142 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.164650 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.174340 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.201049 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-catalog-content\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.201247 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-utilities\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.201364 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psr2q\" (UniqueName: \"kubernetes.io/projected/91fa80db-fb28-4a7e-a93d-b1213f843dc1-kube-api-access-psr2q\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.223519 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-catalog-content\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.236755 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-utilities\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.248256 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psr2q\" (UniqueName: \"kubernetes.io/projected/91fa80db-fb28-4a7e-a93d-b1213f843dc1-kube-api-access-psr2q\") pod \"redhat-operators-lgxnf\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") " pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.437242 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.468910 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zfpkr"] Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.470098 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.507082 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpvn"] Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.521113 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfpkr"] Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.625937 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-catalog-content\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.626472 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6rr\" (UniqueName: \"kubernetes.io/projected/87561a78-e043-4586-be7e-1d25dcf42382-kube-api-access-dj6rr\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.626509 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-utilities\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.756829 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-catalog-content\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.757105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6rr\" (UniqueName: \"kubernetes.io/projected/87561a78-e043-4586-be7e-1d25dcf42382-kube-api-access-dj6rr\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.757130 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-utilities\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.757732 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-utilities\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 
23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.757783 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-catalog-content\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.762962 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.764238 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.768946 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.769156 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.774128 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.799595 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6rr\" (UniqueName: \"kubernetes.io/projected/87561a78-e043-4586-be7e-1d25dcf42382-kube-api-access-dj6rr\") pod \"redhat-operators-zfpkr\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") " pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.860314 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.860428 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.865933 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.965515 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.965574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:16 crc kubenswrapper[4771]: I0123 13:35:16.965881 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.015959 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.016381 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.067051 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3434d12e-d777-4664-a29a-1d2598306b09-secret-volume\") pod \"3434d12e-d777-4664-a29a-1d2598306b09\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.067580 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3434d12e-d777-4664-a29a-1d2598306b09-config-volume\") pod \"3434d12e-d777-4664-a29a-1d2598306b09\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.067622 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5nxs\" (UniqueName: \"kubernetes.io/projected/3434d12e-d777-4664-a29a-1d2598306b09-kube-api-access-g5nxs\") pod \"3434d12e-d777-4664-a29a-1d2598306b09\" (UID: \"3434d12e-d777-4664-a29a-1d2598306b09\") " Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.068243 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3434d12e-d777-4664-a29a-1d2598306b09-config-volume" (OuterVolumeSpecName: "config-volume") pod "3434d12e-d777-4664-a29a-1d2598306b09" (UID: "3434d12e-d777-4664-a29a-1d2598306b09"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.073658 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3434d12e-d777-4664-a29a-1d2598306b09-kube-api-access-g5nxs" (OuterVolumeSpecName: "kube-api-access-g5nxs") pod "3434d12e-d777-4664-a29a-1d2598306b09" (UID: "3434d12e-d777-4664-a29a-1d2598306b09"). InnerVolumeSpecName "kube-api-access-g5nxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.073696 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3434d12e-d777-4664-a29a-1d2598306b09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3434d12e-d777-4664-a29a-1d2598306b09" (UID: "3434d12e-d777-4664-a29a-1d2598306b09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.109765 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" event={"ID":"3434d12e-d777-4664-a29a-1d2598306b09","Type":"ContainerDied","Data":"1a15fd4d9766c9451a6c64ffd1fcde39ebf76fa68545fabd1418dee3244c9601"} Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.109800 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a15fd4d9766c9451a6c64ffd1fcde39ebf76fa68545fabd1418dee3244c9601" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.109861 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.134213 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpvn" event={"ID":"916553c9-819c-453c-b2f1-31529bba6bef","Type":"ContainerStarted","Data":"753c2420c0ec3c158ba703c8995fd1421098c28ab2824e8ff06b902d9aaadc8c"} Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.143676 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.150604 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sgjq5" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.154199 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:17 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 23 13:35:17 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:17 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.154289 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.170747 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3434d12e-d777-4664-a29a-1d2598306b09-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 13:35:17 crc 
kubenswrapper[4771]: I0123 13:35:17.170786 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3434d12e-d777-4664-a29a-1d2598306b09-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.170800 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5nxs\" (UniqueName: \"kubernetes.io/projected/3434d12e-d777-4664-a29a-1d2598306b09-kube-api-access-g5nxs\") on node \"crc\" DevicePath \"\"" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.320536 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.351736 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lgxnf"] Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.586019 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zfpkr"] Jan 23 13:35:17 crc kubenswrapper[4771]: I0123 13:35:17.988686 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.146703 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:18 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 23 13:35:18 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:18 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.146899 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.163254 4771 generic.go:334] "Generic (PLEG): container finished" podID="916553c9-819c-453c-b2f1-31529bba6bef" containerID="f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8" exitCode=0 Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.163380 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpvn" event={"ID":"916553c9-819c-453c-b2f1-31529bba6bef","Type":"ContainerDied","Data":"f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8"} Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.190133 4771 generic.go:334] "Generic (PLEG): container finished" podID="87561a78-e043-4586-be7e-1d25dcf42382" containerID="e038384ebab99d7b60fd225744ab2268b37b256715d14e7408e205a1ecbd66a9" exitCode=0 Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.190230 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfpkr" event={"ID":"87561a78-e043-4586-be7e-1d25dcf42382","Type":"ContainerDied","Data":"e038384ebab99d7b60fd225744ab2268b37b256715d14e7408e205a1ecbd66a9"} Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.190264 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfpkr" event={"ID":"87561a78-e043-4586-be7e-1d25dcf42382","Type":"ContainerStarted","Data":"e5afc7b6501711f3b025d6ade104ee1f6413336ebfae458891b895567f073dbb"} Jan 
23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.199706 4771 generic.go:334] "Generic (PLEG): container finished" podID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerID="4c30cbbacc7b81e1a2276c964c5b0e157eeb41d214fd4c10e10e7d5dd1a3c870" exitCode=0 Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.199786 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgxnf" event={"ID":"91fa80db-fb28-4a7e-a93d-b1213f843dc1","Type":"ContainerDied","Data":"4c30cbbacc7b81e1a2276c964c5b0e157eeb41d214fd4c10e10e7d5dd1a3c870"} Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.199813 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgxnf" event={"ID":"91fa80db-fb28-4a7e-a93d-b1213f843dc1","Type":"ContainerStarted","Data":"e72a8872ab9169ca66ea1f206a92b31f215d080e194ffd9ea3c6593f33c08d77"} Jan 23 13:35:18 crc kubenswrapper[4771]: I0123 13:35:18.233311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e24e3fd4-ac80-4b20-b255-3774ff0e51f0","Type":"ContainerStarted","Data":"9f52aca2bb7babcfb3d498affccaffb335944eb5e45f8b890e761ec3f450c823"} Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.016953 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.017483 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 13:35:19 crc kubenswrapper[4771]: E0123 13:35:19.018197 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3434d12e-d777-4664-a29a-1d2598306b09" containerName="collect-profiles" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.018220 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3434d12e-d777-4664-a29a-1d2598306b09" containerName="collect-profiles" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.018362 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3434d12e-d777-4664-a29a-1d2598306b09" containerName="collect-profiles" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.019116 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.025120 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.025476 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.052043 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b016d90-c27f-4401-99f4-859f3627e491-metrics-certs\") pod \"network-metrics-daemon-4vhqn\" (UID: \"6b016d90-c27f-4401-99f4-859f3627e491\") " pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.052044 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.064474 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4vhqn" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.121043 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.121192 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.147989 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:19 crc kubenswrapper[4771]: [-]has-synced failed: reason withheld Jan 23 13:35:19 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:19 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.148063 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.222551 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.222682 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.222771 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.244649 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.263838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e24e3fd4-ac80-4b20-b255-3774ff0e51f0","Type":"ContainerStarted","Data":"ba4d5ba216743bc9db410198fe0cf4aa6de9604e8475d5ffe0a17039b406949c"} Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.279196 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.279170855 podStartE2EDuration="3.279170855s" podCreationTimestamp="2026-01-23 13:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:19.276877571 +0000 UTC m=+160.299415206" watchObservedRunningTime="2026-01-23 13:35:19.279170855 +0000 UTC m=+160.301708470" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.415402 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.731443 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4vhqn"] Jan 23 13:35:19 crc kubenswrapper[4771]: W0123 13:35:19.781056 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b016d90_c27f_4401_99f4_859f3627e491.slice/crio-a7623b0e64798c2855577ce65b506cdda9d9363bc88f8669cca46e43b9f73a3f WatchSource:0}: Error finding container a7623b0e64798c2855577ce65b506cdda9d9363bc88f8669cca46e43b9f73a3f: Status 404 returned error can't find the container with id a7623b0e64798c2855577ce65b506cdda9d9363bc88f8669cca46e43b9f73a3f Jan 23 13:35:19 crc kubenswrapper[4771]: I0123 13:35:19.937992 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 13:35:20 crc kubenswrapper[4771]: I0123 13:35:20.146249 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:20 crc kubenswrapper[4771]: [+]has-synced ok Jan 23 13:35:20 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:20 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:20 crc kubenswrapper[4771]: I0123 13:35:20.146365 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:20 crc kubenswrapper[4771]: I0123 13:35:20.266978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4","Type":"ContainerStarted","Data":"bbcad01025338c91cd494db7c88d03593b4fe17703dec6c00ca464cac541f416"} Jan 23 13:35:20 crc kubenswrapper[4771]: I0123 13:35:20.271961 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" event={"ID":"6b016d90-c27f-4401-99f4-859f3627e491","Type":"ContainerStarted","Data":"a7623b0e64798c2855577ce65b506cdda9d9363bc88f8669cca46e43b9f73a3f"} Jan 23 13:35:21 crc kubenswrapper[4771]: I0123 13:35:21.147984 4771 patch_prober.go:28] interesting pod/router-default-5444994796-5k7z5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 13:35:21 crc kubenswrapper[4771]: [+]has-synced ok Jan 23 13:35:21 crc kubenswrapper[4771]: [+]process-running ok Jan 23 13:35:21 crc kubenswrapper[4771]: healthz check failed Jan 23 13:35:21 crc kubenswrapper[4771]: I0123 13:35:21.148627 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5k7z5" podUID="67cbbb78-07a1-49a2-aef1-fcb82bbbdc5e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.030019 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2nhhg" Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.145211 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.156257 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5k7z5" Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.307922 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" event={"ID":"6b016d90-c27f-4401-99f4-859f3627e491","Type":"ContainerStarted","Data":"2fe204d149b295f00c43054ac0df273920a8152ac706d89290702f8b7d887948"} Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.314536 4771 generic.go:334] "Generic (PLEG): container finished" podID="e24e3fd4-ac80-4b20-b255-3774ff0e51f0" containerID="ba4d5ba216743bc9db410198fe0cf4aa6de9604e8475d5ffe0a17039b406949c" exitCode=0 Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.314623 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e24e3fd4-ac80-4b20-b255-3774ff0e51f0","Type":"ContainerDied","Data":"ba4d5ba216743bc9db410198fe0cf4aa6de9604e8475d5ffe0a17039b406949c"} Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.318955 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4","Type":"ContainerStarted","Data":"4170b8c28f66a6a7701f3b249cb151dad1d63472bbb4a026ca49f61e38fb193f"} Jan 23 13:35:22 crc kubenswrapper[4771]: I0123 13:35:22.354068 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.354050202 podStartE2EDuration="4.354050202s" podCreationTimestamp="2026-01-23 13:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:22.350195268 +0000 UTC m=+163.372732903" watchObservedRunningTime="2026-01-23 13:35:22.354050202 +0000 UTC m=+163.376587827" Jan 23 13:35:23 crc kubenswrapper[4771]: I0123 13:35:23.937851 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.075669 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kube-api-access\") pod \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.075752 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kubelet-dir\") pod \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\" (UID: \"e24e3fd4-ac80-4b20-b255-3774ff0e51f0\") " Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.076112 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e24e3fd4-ac80-4b20-b255-3774ff0e51f0" (UID: "e24e3fd4-ac80-4b20-b255-3774ff0e51f0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.083068 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e24e3fd4-ac80-4b20-b255-3774ff0e51f0" (UID: "e24e3fd4-ac80-4b20-b255-3774ff0e51f0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.177787 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.177819 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e24e3fd4-ac80-4b20-b255-3774ff0e51f0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.352700 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4vhqn" event={"ID":"6b016d90-c27f-4401-99f4-859f3627e491","Type":"ContainerStarted","Data":"dcc7bd0df37d26da6530377213e20a8f736dd85dc68c7a2b6beff1f716ea25ab"} Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.385699 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4vhqn" podStartSLOduration=148.385660245 podStartE2EDuration="2m28.385660245s" podCreationTimestamp="2026-01-23 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:35:24.370882937 +0000 UTC m=+165.393420572" watchObservedRunningTime="2026-01-23 13:35:24.385660245 +0000 UTC m=+165.408197860" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.437557 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.437555 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e24e3fd4-ac80-4b20-b255-3774ff0e51f0","Type":"ContainerDied","Data":"9f52aca2bb7babcfb3d498affccaffb335944eb5e45f8b890e761ec3f450c823"} Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.437636 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f52aca2bb7babcfb3d498affccaffb335944eb5e45f8b890e761ec3f450c823" Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.447775 4771 generic.go:334] "Generic (PLEG): container finished" podID="a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4" containerID="4170b8c28f66a6a7701f3b249cb151dad1d63472bbb4a026ca49f61e38fb193f" exitCode=0 Jan 23 13:35:24 crc kubenswrapper[4771]: I0123 13:35:24.447842 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4","Type":"ContainerDied","Data":"4170b8c28f66a6a7701f3b249cb151dad1d63472bbb4a026ca49f61e38fb193f"} Jan 23 13:35:25 crc kubenswrapper[4771]: I0123 13:35:25.911188 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:25 crc kubenswrapper[4771]: I0123 13:35:25.917894 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:35:25 crc kubenswrapper[4771]: I0123 13:35:25.932138 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hgksm" Jan 23 13:35:30 crc kubenswrapper[4771]: I0123 13:35:30.312377 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:35:30 crc kubenswrapper[4771]: I0123 13:35:30.312877 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:35:33 crc kubenswrapper[4771]: I0123 13:35:33.702211 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.584121 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4","Type":"ContainerDied","Data":"bbcad01025338c91cd494db7c88d03593b4fe17703dec6c00ca464cac541f416"} Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.584597 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbcad01025338c91cd494db7c88d03593b4fe17703dec6c00ca464cac541f416" Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.600671 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.777366 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kubelet-dir\") pod \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\" (UID: \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.777486 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kube-api-access\") pod \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\" (UID: \"a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4\") " Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.777498 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4" (UID: "a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.777944 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.785926 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4" (UID: "a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:35:36 crc kubenswrapper[4771]: I0123 13:35:36.879228 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 13:35:37 crc kubenswrapper[4771]: I0123 13:35:37.589523 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 13:35:46 crc kubenswrapper[4771]: I0123 13:35:46.976124 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 13:35:47 crc kubenswrapper[4771]: I0123 13:35:47.280825 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xcs2g" Jan 23 13:35:53 crc kubenswrapper[4771]: E0123 13:35:53.306629 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 13:35:53 crc kubenswrapper[4771]: E0123 13:35:53.307337 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfg4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vfx4h_openshift-marketplace(330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:35:53 crc kubenswrapper[4771]: E0123 13:35:53.308539 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vfx4h" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.802427 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 13:35:55 crc kubenswrapper[4771]: E0123 13:35:55.803289 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4" containerName="pruner" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.803350 
4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4" containerName="pruner" Jan 23 13:35:55 crc kubenswrapper[4771]: E0123 13:35:55.803374 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24e3fd4-ac80-4b20-b255-3774ff0e51f0" containerName="pruner" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.803385 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24e3fd4-ac80-4b20-b255-3774ff0e51f0" containerName="pruner" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.803593 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ac7597-95fe-4e6b-a9f2-7259f6d58cf4" containerName="pruner" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.803615 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24e3fd4-ac80-4b20-b255-3774ff0e51f0" containerName="pruner" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.804202 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.808434 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.808450 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.815178 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.971846 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:55 crc kubenswrapper[4771]: I0123 13:35:55.972494 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:56 crc kubenswrapper[4771]: I0123 13:35:56.073932 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:56 crc kubenswrapper[4771]: I0123 13:35:56.074005 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:56 crc kubenswrapper[4771]: I0123 13:35:56.074015 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:56 crc kubenswrapper[4771]: I0123 13:35:56.114646 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:56 crc kubenswrapper[4771]: I0123 13:35:56.157559 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:35:56 crc kubenswrapper[4771]: E0123 13:35:56.224164 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vfx4h" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" Jan 23 13:35:56 crc kubenswrapper[4771]: E0123 13:35:56.743467 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 13:35:56 crc kubenswrapper[4771]: E0123 13:35:56.743665 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-994ss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xmpvn_openshift-marketplace(916553c9-819c-453c-b2f1-31529bba6bef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:35:56 crc kubenswrapper[4771]: E0123 13:35:56.745445 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/redhat-marketplace-xmpvn" podUID="916553c9-819c-453c-b2f1-31529bba6bef" Jan 23 13:35:57 crc kubenswrapper[4771]: E0123 13:35:57.600653 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 13:35:57 crc kubenswrapper[4771]: E0123 13:35:57.600941 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dkt98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ksd87_openshift-marketplace(f132f5bb-4325-44e2-9fa6-6b240a1adb31): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:35:57 crc kubenswrapper[4771]: E0123 13:35:57.602133 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ksd87" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" Jan 23 13:36:00 crc kubenswrapper[4771]: E0123 13:36:00.002927 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ksd87" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" Jan 23 13:36:00 crc kubenswrapper[4771]: E0123 13:36:00.003215 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xmpvn" podUID="916553c9-819c-453c-b2f1-31529bba6bef" Jan 23 13:36:00 crc kubenswrapper[4771]: 
I0123 13:36:00.312076 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:36:00 crc kubenswrapper[4771]: I0123 13:36:00.312177 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:36:00 crc kubenswrapper[4771]: I0123 13:36:00.312245 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:36:00 crc kubenswrapper[4771]: I0123 13:36:00.313238 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:36:00 crc kubenswrapper[4771]: I0123 13:36:00.313514 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235" gracePeriod=600 Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.201383 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.202513 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.216050 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.313363 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.313429 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-var-lock\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.313574 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kube-api-access\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.414906 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.414991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-var-lock\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.415037 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.415118 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-var-lock\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.415239 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kube-api-access\") pod \"installer-9-crc\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.434142 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:01 crc kubenswrapper[4771]: I0123 13:36:01.528158 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 13:36:02 crc kubenswrapper[4771]: I0123 13:36:02.198231 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235" exitCode=0 Jan 23 13:36:02 crc kubenswrapper[4771]: I0123 13:36:02.198355 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235"} Jan 23 13:36:03 crc kubenswrapper[4771]: E0123 13:36:03.650802 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 13:36:03 crc kubenswrapper[4771]: E0123 13:36:03.650989 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6chdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6vlbq_openshift-marketplace(b9ec392b-4022-454f-ba4b-1a4d4d2edd87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:36:03 crc kubenswrapper[4771]: E0123 13:36:03.652197 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6vlbq" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" Jan 23 13:36:05 crc kubenswrapper[4771]: E0123 
13:36:05.072042 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 13:36:05 crc kubenswrapper[4771]: E0123 13:36:05.072215 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtfk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vk9j6_openshift-marketplace(1e885f8a-42cc-49ad-9b52-759c9adb8ad7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:36:05 crc kubenswrapper[4771]: E0123 13:36:05.073483 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vk9j6" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.079282 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vk9j6" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.079319 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6vlbq" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.113339 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.114203 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xjwjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-622jl_openshift-marketplace(a8fac1eb-9145-43c5-83c8-bda72dae51d5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.115373 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-622jl" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.170974 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.171126 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dj6rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zfpkr_openshift-marketplace(87561a78-e043-4586-be7e-1d25dcf42382): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.172337 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zfpkr" podUID="87561a78-e043-4586-be7e-1d25dcf42382" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.210603 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.210924 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psr2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lgxnf_openshift-marketplace(91fa80db-fb28-4a7e-a93d-b1213f843dc1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.212293 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lgxnf" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.245726 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zfpkr" podUID="87561a78-e043-4586-be7e-1d25dcf42382" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.254286 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-622jl" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" Jan 23 13:36:09 crc kubenswrapper[4771]: E0123 13:36:09.254548 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lgxnf" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" Jan 23 13:36:09 crc kubenswrapper[4771]: I0123 13:36:09.544996 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 13:36:09 crc kubenswrapper[4771]: W0123 13:36:09.553142 4771 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod54222a00_e7a5_4ceb_9b33_7e5a80a434c0.slice/crio-bb76d1025ec10afb35381a310dd15d4e4c457607ccc373bc86a56a8ffd7ef328 WatchSource:0}: Error finding container bb76d1025ec10afb35381a310dd15d4e4c457607ccc373bc86a56a8ffd7ef328: Status 404 returned error can't find the container with id bb76d1025ec10afb35381a310dd15d4e4c457607ccc373bc86a56a8ffd7ef328 Jan 23 13:36:09 crc kubenswrapper[4771]: I0123 13:36:09.607432 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.250723 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"54222a00-e7a5-4ceb-9b33-7e5a80a434c0","Type":"ContainerStarted","Data":"483053ab1cc84e5c5e7a48f0a202c95dfed0ff6fde2ec8f4216fcde8de62f28f"} Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.251367 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"54222a00-e7a5-4ceb-9b33-7e5a80a434c0","Type":"ContainerStarted","Data":"bb76d1025ec10afb35381a310dd15d4e4c457607ccc373bc86a56a8ffd7ef328"} Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.253671 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"1b44f9611bbafce674e71ca1e8d34068dfa0d63956d90aaa82888afd111bd7d1"} Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.262997 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b77c97d-c7e0-47fc-9467-509fddc76a7f","Type":"ContainerStarted","Data":"46e06d0ea60893b88a7ae475cb5f1c2c76f38e62dcd7f2fa18140f41a0461b18"} Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.263065 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b77c97d-c7e0-47fc-9467-509fddc76a7f","Type":"ContainerStarted","Data":"feab42ae4a08dbc6b8bda3a92690913d7fc1fc234552d64ebac014eb7536df8c"} Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.269821 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=9.269790966 podStartE2EDuration="9.269790966s" podCreationTimestamp="2026-01-23 13:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:36:10.268014559 +0000 UTC m=+211.290552184" watchObservedRunningTime="2026-01-23 13:36:10.269790966 +0000 UTC m=+211.292328621" Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.271677 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfx4h" event={"ID":"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9","Type":"ContainerStarted","Data":"495c697dfc0ba6734fd1c349e5814963263b6cf89bed066918b395f6e57eff5b"} Jan 23 13:36:10 crc kubenswrapper[4771]: I0123 13:36:10.302241 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=15.302222076 podStartE2EDuration="15.302222076s" podCreationTimestamp="2026-01-23 13:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:36:10.298731284 +0000 UTC m=+211.321268909" 
watchObservedRunningTime="2026-01-23 13:36:10.302222076 +0000 UTC m=+211.324759701" Jan 23 13:36:11 crc kubenswrapper[4771]: I0123 13:36:11.283297 4771 generic.go:334] "Generic (PLEG): container finished" podID="0b77c97d-c7e0-47fc-9467-509fddc76a7f" containerID="46e06d0ea60893b88a7ae475cb5f1c2c76f38e62dcd7f2fa18140f41a0461b18" exitCode=0 Jan 23 13:36:11 crc kubenswrapper[4771]: I0123 13:36:11.288144 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b77c97d-c7e0-47fc-9467-509fddc76a7f","Type":"ContainerDied","Data":"46e06d0ea60893b88a7ae475cb5f1c2c76f38e62dcd7f2fa18140f41a0461b18"} Jan 23 13:36:11 crc kubenswrapper[4771]: I0123 13:36:11.302586 4771 generic.go:334] "Generic (PLEG): container finished" podID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerID="495c697dfc0ba6734fd1c349e5814963263b6cf89bed066918b395f6e57eff5b" exitCode=0 Jan 23 13:36:11 crc kubenswrapper[4771]: I0123 13:36:11.303532 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfx4h" event={"ID":"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9","Type":"ContainerDied","Data":"495c697dfc0ba6734fd1c349e5814963263b6cf89bed066918b395f6e57eff5b"} Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.309648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksd87" event={"ID":"f132f5bb-4325-44e2-9fa6-6b240a1adb31","Type":"ContainerStarted","Data":"334c56def41aa7ecf8258fe665cfce8f28c97bc17f411c445adfcddf7c04c57f"} Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.313612 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfx4h" event={"ID":"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9","Type":"ContainerStarted","Data":"35f5e11180df8d5ef0391e3e1e8dfb4e95302fbbfd362c0bee50f227bb2d3f7d"} Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.356056 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vfx4h" podStartSLOduration=2.530900391 podStartE2EDuration="1m0.356034628s" podCreationTimestamp="2026-01-23 13:35:12 +0000 UTC" firstStartedPulling="2026-01-23 13:35:14.085432822 +0000 UTC m=+155.107970447" lastFinishedPulling="2026-01-23 13:36:11.910567039 +0000 UTC m=+212.933104684" observedRunningTime="2026-01-23 13:36:12.351303905 +0000 UTC m=+213.373841550" watchObservedRunningTime="2026-01-23 13:36:12.356034628 +0000 UTC m=+213.378572263" Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.574685 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.672695 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kube-api-access\") pod \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.672900 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kubelet-dir\") pod \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\" (UID: \"0b77c97d-c7e0-47fc-9467-509fddc76a7f\") " Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.673062 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0b77c97d-c7e0-47fc-9467-509fddc76a7f" (UID: "0b77c97d-c7e0-47fc-9467-509fddc76a7f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.679700 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b77c97d-c7e0-47fc-9467-509fddc76a7f" (UID: "0b77c97d-c7e0-47fc-9467-509fddc76a7f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.775342 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:12 crc kubenswrapper[4771]: I0123 13:36:12.775442 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b77c97d-c7e0-47fc-9467-509fddc76a7f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:13 crc kubenswrapper[4771]: I0123 13:36:13.227426 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vfx4h" Jan 23 13:36:13 crc kubenswrapper[4771]: I0123 13:36:13.238532 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vfx4h" Jan 23 13:36:13 crc kubenswrapper[4771]: I0123 13:36:13.322688 4771 generic.go:334] "Generic (PLEG): container finished" podID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerID="334c56def41aa7ecf8258fe665cfce8f28c97bc17f411c445adfcddf7c04c57f" exitCode=0 Jan 23 13:36:13 crc kubenswrapper[4771]: I0123 13:36:13.322777 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksd87" event={"ID":"f132f5bb-4325-44e2-9fa6-6b240a1adb31","Type":"ContainerDied","Data":"334c56def41aa7ecf8258fe665cfce8f28c97bc17f411c445adfcddf7c04c57f"} Jan 23 13:36:13 crc kubenswrapper[4771]: I0123 13:36:13.325259 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 13:36:13 crc kubenswrapper[4771]: I0123 13:36:13.325251 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"0b77c97d-c7e0-47fc-9467-509fddc76a7f","Type":"ContainerDied","Data":"feab42ae4a08dbc6b8bda3a92690913d7fc1fc234552d64ebac014eb7536df8c"} Jan 23 13:36:13 crc kubenswrapper[4771]: I0123 13:36:13.325328 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feab42ae4a08dbc6b8bda3a92690913d7fc1fc234552d64ebac014eb7536df8c" Jan 23 13:36:14 crc kubenswrapper[4771]: I0123 13:36:14.296184 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vfx4h" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="registry-server" probeResult="failure" output=< Jan 23 13:36:14 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 13:36:14 crc kubenswrapper[4771]: > Jan 23 13:36:14 crc kubenswrapper[4771]: I0123 13:36:14.331368 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksd87" event={"ID":"f132f5bb-4325-44e2-9fa6-6b240a1adb31","Type":"ContainerStarted","Data":"2a3d3ef0ca46ba2046aa2bd5a1f9260ee13164c5bd938fc61cc6db066f1c0275"} Jan 23 13:36:14 crc kubenswrapper[4771]: I0123 13:36:14.350032 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ksd87" podStartSLOduration=2.621038048 podStartE2EDuration="1m1.350011321s" podCreationTimestamp="2026-01-23 13:35:13 +0000 UTC" firstStartedPulling="2026-01-23 13:35:15.00205551 +0000 UTC m=+156.024593135" lastFinishedPulling="2026-01-23 13:36:13.731028783 +0000 UTC m=+214.753566408" observedRunningTime="2026-01-23 13:36:14.348798411 +0000 UTC m=+215.371336036" watchObservedRunningTime="2026-01-23 13:36:14.350011321 +0000 UTC m=+215.372548946" Jan 23 13:36:16 crc kubenswrapper[4771]: I0123 13:36:16.345826 4771 generic.go:334] "Generic (PLEG): container finished" podID="916553c9-819c-453c-b2f1-31529bba6bef" containerID="8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f" exitCode=0 Jan 23 13:36:16 crc kubenswrapper[4771]: I0123 13:36:16.345895 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpvn" event={"ID":"916553c9-819c-453c-b2f1-31529bba6bef","Type":"ContainerDied","Data":"8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f"} Jan 23 13:36:17 crc kubenswrapper[4771]: I0123 13:36:17.355763 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpvn" event={"ID":"916553c9-819c-453c-b2f1-31529bba6bef","Type":"ContainerStarted","Data":"c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7"} Jan 23 13:36:21 crc kubenswrapper[4771]: I0123 13:36:21.255266 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xmpvn" podStartSLOduration=7.710044515 podStartE2EDuration="1m6.255247344s" podCreationTimestamp="2026-01-23 13:35:15 +0000 UTC" firstStartedPulling="2026-01-23 13:35:18.171600919 +0000 UTC m=+159.194138544" lastFinishedPulling="2026-01-23 13:36:16.716803748 +0000 UTC m=+217.739341373" observedRunningTime="2026-01-23 13:36:17.387937505 +0000 UTC m=+218.410475140" watchObservedRunningTime="2026-01-23 13:36:21.255247344 +0000 UTC m=+222.277784979" Jan 23 13:36:23 crc 
kubenswrapper[4771]: I0123 13:36:23.605487 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:36:23 crc kubenswrapper[4771]: I0123 13:36:23.605823 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:36:23 crc kubenswrapper[4771]: I0123 13:36:23.918348 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:36:23 crc kubenswrapper[4771]: I0123 13:36:23.920382 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vfx4h" Jan 23 13:36:23 crc kubenswrapper[4771]: I0123 13:36:23.962824 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vfx4h" Jan 23 13:36:24 crc kubenswrapper[4771]: I0123 13:36:24.429976 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.401542 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vlbq" event={"ID":"b9ec392b-4022-454f-ba4b-1a4d4d2edd87","Type":"ContainerStarted","Data":"d4a45e9b13664bd22a5d1aa244cd6261b66b688a5a865559d07ade0ea0b183b0"} Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.403927 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk9j6" event={"ID":"1e885f8a-42cc-49ad-9b52-759c9adb8ad7","Type":"ContainerStarted","Data":"6dc54a088043ccf5e0b034a2492b8fb46cf469668ef87e6e505ccd81b5232320"} Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.407429 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfpkr" event={"ID":"87561a78-e043-4586-be7e-1d25dcf42382","Type":"ContainerStarted","Data":"9b5300c6fb3d32f600dcac3150a481db7e108b7630736dd2e916873cd50a3ae5"} Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.409585 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622jl" event={"ID":"a8fac1eb-9145-43c5-83c8-bda72dae51d5","Type":"ContainerStarted","Data":"64d0d919d254e06d6534b2d863f9349a9a74b3d41daacecc6af1449f82a49b86"} Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.411423 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgxnf" event={"ID":"91fa80db-fb28-4a7e-a93d-b1213f843dc1","Type":"ContainerStarted","Data":"6dfdb251d0ff2bc2fd972dc9a9338a5b4e894d446cac955151561b0c77cc37ca"} Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.464193 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksd87"] Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.794057 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.794128 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:36:25 crc kubenswrapper[4771]: I0123 13:36:25.867980 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.420046 4771 
generic.go:334] "Generic (PLEG): container finished" podID="87561a78-e043-4586-be7e-1d25dcf42382" containerID="9b5300c6fb3d32f600dcac3150a481db7e108b7630736dd2e916873cd50a3ae5" exitCode=0 Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.420118 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfpkr" event={"ID":"87561a78-e043-4586-be7e-1d25dcf42382","Type":"ContainerDied","Data":"9b5300c6fb3d32f600dcac3150a481db7e108b7630736dd2e916873cd50a3ae5"} Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.423063 4771 generic.go:334] "Generic (PLEG): container finished" podID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerID="64d0d919d254e06d6534b2d863f9349a9a74b3d41daacecc6af1449f82a49b86" exitCode=0 Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.423131 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622jl" event={"ID":"a8fac1eb-9145-43c5-83c8-bda72dae51d5","Type":"ContainerDied","Data":"64d0d919d254e06d6534b2d863f9349a9a74b3d41daacecc6af1449f82a49b86"} Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.425397 4771 generic.go:334] "Generic (PLEG): container finished" podID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerID="6dfdb251d0ff2bc2fd972dc9a9338a5b4e894d446cac955151561b0c77cc37ca" exitCode=0 Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.425519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgxnf" event={"ID":"91fa80db-fb28-4a7e-a93d-b1213f843dc1","Type":"ContainerDied","Data":"6dfdb251d0ff2bc2fd972dc9a9338a5b4e894d446cac955151561b0c77cc37ca"} Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.427647 4771 generic.go:334] "Generic (PLEG): container finished" podID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerID="d4a45e9b13664bd22a5d1aa244cd6261b66b688a5a865559d07ade0ea0b183b0" exitCode=0 Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.427705 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vlbq" event={"ID":"b9ec392b-4022-454f-ba4b-1a4d4d2edd87","Type":"ContainerDied","Data":"d4a45e9b13664bd22a5d1aa244cd6261b66b688a5a865559d07ade0ea0b183b0"} Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.431149 4771 generic.go:334] "Generic (PLEG): container finished" podID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerID="6dc54a088043ccf5e0b034a2492b8fb46cf469668ef87e6e505ccd81b5232320" exitCode=0 Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.431192 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk9j6" event={"ID":"1e885f8a-42cc-49ad-9b52-759c9adb8ad7","Type":"ContainerDied","Data":"6dc54a088043ccf5e0b034a2492b8fb46cf469668ef87e6e505ccd81b5232320"} Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.431357 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ksd87" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="registry-server" containerID="cri-o://2a3d3ef0ca46ba2046aa2bd5a1f9260ee13164c5bd938fc61cc6db066f1c0275" gracePeriod=2 Jan 23 13:36:26 crc kubenswrapper[4771]: I0123 13:36:26.518254 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.440437 4771 generic.go:334] "Generic (PLEG): container finished" podID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" 
containerID="2a3d3ef0ca46ba2046aa2bd5a1f9260ee13164c5bd938fc61cc6db066f1c0275" exitCode=0 Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.440524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksd87" event={"ID":"f132f5bb-4325-44e2-9fa6-6b240a1adb31","Type":"ContainerDied","Data":"2a3d3ef0ca46ba2046aa2bd5a1f9260ee13164c5bd938fc61cc6db066f1c0275"} Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.708102 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.786397 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-utilities\") pod \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.787610 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkt98\" (UniqueName: \"kubernetes.io/projected/f132f5bb-4325-44e2-9fa6-6b240a1adb31-kube-api-access-dkt98\") pod \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.788396 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-utilities" (OuterVolumeSpecName: "utilities") pod "f132f5bb-4325-44e2-9fa6-6b240a1adb31" (UID: "f132f5bb-4325-44e2-9fa6-6b240a1adb31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.788599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-catalog-content\") pod \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\" (UID: \"f132f5bb-4325-44e2-9fa6-6b240a1adb31\") " Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.789108 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.795605 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f132f5bb-4325-44e2-9fa6-6b240a1adb31-kube-api-access-dkt98" (OuterVolumeSpecName: "kube-api-access-dkt98") pod "f132f5bb-4325-44e2-9fa6-6b240a1adb31" (UID: "f132f5bb-4325-44e2-9fa6-6b240a1adb31"). InnerVolumeSpecName "kube-api-access-dkt98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.876784 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f132f5bb-4325-44e2-9fa6-6b240a1adb31" (UID: "f132f5bb-4325-44e2-9fa6-6b240a1adb31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.891282 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkt98\" (UniqueName: \"kubernetes.io/projected/f132f5bb-4325-44e2-9fa6-6b240a1adb31-kube-api-access-dkt98\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:27 crc kubenswrapper[4771]: I0123 13:36:27.891335 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f132f5bb-4325-44e2-9fa6-6b240a1adb31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.448476 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622jl" event={"ID":"a8fac1eb-9145-43c5-83c8-bda72dae51d5","Type":"ContainerStarted","Data":"8ffa6d274dff3ea8695025b029b3d2fe52defcc1b897fff9ce47f8a67f62aeb3"} Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.451887 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgxnf" event={"ID":"91fa80db-fb28-4a7e-a93d-b1213f843dc1","Type":"ContainerStarted","Data":"7d8e423beb55ae62571a48031c91fa6e19894dcf7054e2ea27a7e55432f5e1ff"} Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.454605 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vlbq" event={"ID":"b9ec392b-4022-454f-ba4b-1a4d4d2edd87","Type":"ContainerStarted","Data":"75b1592ba986fac5be553b5748ee9e6360332ce5507bcb6d9358149bd1e90428"} Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.456839 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk9j6" event={"ID":"1e885f8a-42cc-49ad-9b52-759c9adb8ad7","Type":"ContainerStarted","Data":"17e0a3dd9633574a7c8541154b05fd75ce2a6387c58d03ced7f6883440282a2a"} Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.459129 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfpkr" event={"ID":"87561a78-e043-4586-be7e-1d25dcf42382","Type":"ContainerStarted","Data":"2f953cfa0fd16c8353600b3201b6e24bc9e6420c05262133eaa9d5f53eb7b01f"} Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.461735 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksd87" event={"ID":"f132f5bb-4325-44e2-9fa6-6b240a1adb31","Type":"ContainerDied","Data":"4e36110ea71231d52c4a663a852a75edece1c9e04f607f57887a301544d9e46e"} Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.461765 4771 scope.go:117] "RemoveContainer" containerID="2a3d3ef0ca46ba2046aa2bd5a1f9260ee13164c5bd938fc61cc6db066f1c0275" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.461860 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ksd87" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.471638 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-622jl" podStartSLOduration=2.753830068 podStartE2EDuration="1m15.471621214s" podCreationTimestamp="2026-01-23 13:35:13 +0000 UTC" firstStartedPulling="2026-01-23 13:35:15.006740081 +0000 UTC m=+156.029277706" lastFinishedPulling="2026-01-23 13:36:27.724531227 +0000 UTC m=+228.747068852" observedRunningTime="2026-01-23 13:36:28.470040013 +0000 UTC m=+229.492577638" watchObservedRunningTime="2026-01-23 13:36:28.471621214 +0000 UTC m=+229.494158839" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.487829 4771 scope.go:117] "RemoveContainer" containerID="334c56def41aa7ecf8258fe665cfce8f28c97bc17f411c445adfcddf7c04c57f" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.509491 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vlbq" podStartSLOduration=3.56185116 podStartE2EDuration="1m15.509465891s" podCreationTimestamp="2026-01-23 13:35:13 +0000 UTC" firstStartedPulling="2026-01-23 13:35:15.015178154 +0000 UTC m=+156.037715779" lastFinishedPulling="2026-01-23 13:36:26.962792885 +0000 UTC m=+227.985330510" observedRunningTime="2026-01-23 13:36:28.492215762 +0000 UTC m=+229.514753397" watchObservedRunningTime="2026-01-23 13:36:28.509465891 +0000 UTC m=+229.532003516" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.511267 4771 scope.go:117] "RemoveContainer" containerID="0b5f307e13bbe5d0c3916b9739dd67f0aaf1f4a38f5c45bedc8f49a86d056aff" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.533031 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zfpkr" podStartSLOduration=3.235001972 podStartE2EDuration="1m12.533007993s" podCreationTimestamp="2026-01-23 13:35:16 +0000 UTC" firstStartedPulling="2026-01-23 13:35:18.199604346 +0000 UTC m=+159.222141981" lastFinishedPulling="2026-01-23 13:36:27.497610377 +0000 UTC m=+228.520148002" observedRunningTime="2026-01-23 13:36:28.510319428 +0000 UTC m=+229.532857043" watchObservedRunningTime="2026-01-23 13:36:28.533007993 +0000 UTC m=+229.555545618" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.534130 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vk9j6" podStartSLOduration=2.147449641 podStartE2EDuration="1m13.534120599s" podCreationTimestamp="2026-01-23 13:35:15 +0000 UTC" firstStartedPulling="2026-01-23 13:35:16.039223977 +0000 UTC m=+157.061761602" lastFinishedPulling="2026-01-23 13:36:27.425894935 +0000 UTC m=+228.448432560" observedRunningTime="2026-01-23 13:36:28.532182226 +0000 UTC m=+229.554719851" watchObservedRunningTime="2026-01-23 13:36:28.534120599 +0000 UTC m=+229.556658224" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.571646 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lgxnf" podStartSLOduration=3.845967171 podStartE2EDuration="1m12.571624594s" podCreationTimestamp="2026-01-23 13:35:16 +0000 UTC" firstStartedPulling="2026-01-23 13:35:18.20960568 +0000 UTC m=+159.232143305" lastFinishedPulling="2026-01-23 13:36:26.935263103 +0000 UTC m=+227.957800728" observedRunningTime="2026-01-23 13:36:28.567606493 +0000 UTC m=+229.590144118" watchObservedRunningTime="2026-01-23 
13:36:28.571624594 +0000 UTC m=+229.594162219" Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.587314 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksd87"] Jan 23 13:36:28 crc kubenswrapper[4771]: I0123 13:36:28.598441 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ksd87"] Jan 23 13:36:29 crc kubenswrapper[4771]: I0123 13:36:29.237105 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" path="/var/lib/kubelet/pods/f132f5bb-4325-44e2-9fa6-6b240a1adb31/volumes" Jan 23 13:36:29 crc kubenswrapper[4771]: I0123 13:36:29.865192 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpvn"] Jan 23 13:36:29 crc kubenswrapper[4771]: I0123 13:36:29.865578 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xmpvn" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="registry-server" containerID="cri-o://c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7" gracePeriod=2 Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.332051 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.477590 4771 generic.go:334] "Generic (PLEG): container finished" podID="916553c9-819c-453c-b2f1-31529bba6bef" containerID="c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7" exitCode=0 Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.477653 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpvn" event={"ID":"916553c9-819c-453c-b2f1-31529bba6bef","Type":"ContainerDied","Data":"c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7"} Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.477696 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmpvn" event={"ID":"916553c9-819c-453c-b2f1-31529bba6bef","Type":"ContainerDied","Data":"753c2420c0ec3c158ba703c8995fd1421098c28ab2824e8ff06b902d9aaadc8c"} Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.477702 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmpvn" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.477718 4771 scope.go:117] "RemoveContainer" containerID="c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.501270 4771 scope.go:117] "RemoveContainer" containerID="8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.517264 4771 scope.go:117] "RemoveContainer" containerID="f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.523079 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-994ss\" (UniqueName: \"kubernetes.io/projected/916553c9-819c-453c-b2f1-31529bba6bef-kube-api-access-994ss\") pod \"916553c9-819c-453c-b2f1-31529bba6bef\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.523181 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-utilities\") pod \"916553c9-819c-453c-b2f1-31529bba6bef\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.523292 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-catalog-content\") pod \"916553c9-819c-453c-b2f1-31529bba6bef\" (UID: \"916553c9-819c-453c-b2f1-31529bba6bef\") " Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.524330 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-utilities" (OuterVolumeSpecName: "utilities") pod "916553c9-819c-453c-b2f1-31529bba6bef" (UID: "916553c9-819c-453c-b2f1-31529bba6bef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.533388 4771 scope.go:117] "RemoveContainer" containerID="c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7" Jan 23 13:36:30 crc kubenswrapper[4771]: E0123 13:36:30.533755 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7\": container with ID starting with c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7 not found: ID does not exist" containerID="c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.533805 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7"} err="failed to get container status \"c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7\": rpc error: code = NotFound desc = could not find container \"c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7\": container with ID starting with c51f7cd82bf94de00b3087b2802ebe587e956d3551fc5790cc41440cc3dccfd7 not found: ID does not exist" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.533837 4771 scope.go:117] "RemoveContainer" containerID="8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f" Jan 23 13:36:30 crc kubenswrapper[4771]: E0123 13:36:30.534101 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f\": container with ID starting with 8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f not found: ID does not exist" containerID="8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.534131 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f"} err="failed to get container status \"8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f\": rpc error: code = NotFound desc = could not find container \"8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f\": container with ID starting with 8ac84321208092807e227b013cf27cbc2610d0291ad0f69b3842fabee6476e4f not found: ID does not exist" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.534152 4771 scope.go:117] "RemoveContainer" containerID="f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8" Jan 23 13:36:30 crc kubenswrapper[4771]: E0123 13:36:30.534337 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8\": container with ID starting with f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8 not found: ID does not exist" containerID="f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.534366 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8"} err="failed to get container status \"f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8\": rpc error: code = NotFound desc = could not 
find container \"f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8\": container with ID starting with f2a835b17b1d88755ed0ede5948b63822e0751642f59b136040db4145ccdadf8 not found: ID does not exist" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.536565 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916553c9-819c-453c-b2f1-31529bba6bef-kube-api-access-994ss" (OuterVolumeSpecName: "kube-api-access-994ss") pod "916553c9-819c-453c-b2f1-31529bba6bef" (UID: "916553c9-819c-453c-b2f1-31529bba6bef"). InnerVolumeSpecName "kube-api-access-994ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.552839 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "916553c9-819c-453c-b2f1-31529bba6bef" (UID: "916553c9-819c-453c-b2f1-31529bba6bef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.625582 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.625627 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-994ss\" (UniqueName: \"kubernetes.io/projected/916553c9-819c-453c-b2f1-31529bba6bef-kube-api-access-994ss\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.625643 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/916553c9-819c-453c-b2f1-31529bba6bef-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.805631 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpvn"] Jan 23 13:36:30 crc kubenswrapper[4771]: I0123 13:36:30.814972 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmpvn"] Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.235306 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="916553c9-819c-453c-b2f1-31529bba6bef" path="/var/lib/kubelet/pods/916553c9-819c-453c-b2f1-31529bba6bef/volumes" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708262 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pljhw"] Jan 23 13:36:31 crc kubenswrapper[4771]: E0123 13:36:31.708536 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="extract-utilities" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708552 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="extract-utilities" Jan 23 13:36:31 crc kubenswrapper[4771]: E0123 13:36:31.708564 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="extract-content" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708571 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="extract-content" Jan 23 13:36:31 crc kubenswrapper[4771]: E0123 
13:36:31.708584 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="registry-server" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708593 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="registry-server" Jan 23 13:36:31 crc kubenswrapper[4771]: E0123 13:36:31.708606 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="extract-content" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708612 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="extract-content" Jan 23 13:36:31 crc kubenswrapper[4771]: E0123 13:36:31.708624 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="registry-server" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708630 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="registry-server" Jan 23 13:36:31 crc kubenswrapper[4771]: E0123 13:36:31.708639 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b77c97d-c7e0-47fc-9467-509fddc76a7f" containerName="pruner" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708645 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b77c97d-c7e0-47fc-9467-509fddc76a7f" containerName="pruner" Jan 23 13:36:31 crc kubenswrapper[4771]: E0123 13:36:31.708656 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="extract-utilities" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708664 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="extract-utilities" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708771 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b77c97d-c7e0-47fc-9467-509fddc76a7f" containerName="pruner" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708783 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f132f5bb-4325-44e2-9fa6-6b240a1adb31" containerName="registry-server" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.708798 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="916553c9-819c-453c-b2f1-31529bba6bef" containerName="registry-server" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.709226 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.722209 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pljhw"] Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.839955 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-bound-sa-token\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.840013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/220369ff-19c1-4d3d-9f30-8b46ba83b630-registry-certificates\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.840039 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.840061 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/220369ff-19c1-4d3d-9f30-8b46ba83b630-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.840105 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/220369ff-19c1-4d3d-9f30-8b46ba83b630-trusted-ca\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.840131 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7sh\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-kube-api-access-7c7sh\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.840281 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-registry-tls\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.840342 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/220369ff-19c1-4d3d-9f30-8b46ba83b630-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.872043 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.941815 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/220369ff-19c1-4d3d-9f30-8b46ba83b630-trusted-ca\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.941870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c7sh\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-kube-api-access-7c7sh\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.941906 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-registry-tls\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.941930 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/220369ff-19c1-4d3d-9f30-8b46ba83b630-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.941946 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-bound-sa-token\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.941973 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/220369ff-19c1-4d3d-9f30-8b46ba83b630-registry-certificates\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.941992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/220369ff-19c1-4d3d-9f30-8b46ba83b630-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.942596 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/220369ff-19c1-4d3d-9f30-8b46ba83b630-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.943767 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/220369ff-19c1-4d3d-9f30-8b46ba83b630-registry-certificates\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.943878 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/220369ff-19c1-4d3d-9f30-8b46ba83b630-trusted-ca\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.946343 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-registry-tls\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.951857 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/220369ff-19c1-4d3d-9f30-8b46ba83b630-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.959001 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-bound-sa-token\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:31 crc kubenswrapper[4771]: I0123 13:36:31.959396 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c7sh\" (UniqueName: \"kubernetes.io/projected/220369ff-19c1-4d3d-9f30-8b46ba83b630-kube-api-access-7c7sh\") pod \"image-registry-66df7c8f76-pljhw\" (UID: \"220369ff-19c1-4d3d-9f30-8b46ba83b630\") " pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:32 crc kubenswrapper[4771]: I0123 13:36:32.023985 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:32 crc kubenswrapper[4771]: I0123 13:36:32.464088 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pljhw"] Jan 23 13:36:32 crc kubenswrapper[4771]: W0123 13:36:32.476745 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod220369ff_19c1_4d3d_9f30_8b46ba83b630.slice/crio-d8177fc74725ea76f064cba97a851e0f65d87fbed6ddf6c7f7495061fb4db22b WatchSource:0}: Error finding container d8177fc74725ea76f064cba97a851e0f65d87fbed6ddf6c7f7495061fb4db22b: Status 404 returned error can't find the container with id d8177fc74725ea76f064cba97a851e0f65d87fbed6ddf6c7f7495061fb4db22b Jan 23 13:36:32 crc kubenswrapper[4771]: I0123 13:36:32.489048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" event={"ID":"220369ff-19c1-4d3d-9f30-8b46ba83b630","Type":"ContainerStarted","Data":"d8177fc74725ea76f064cba97a851e0f65d87fbed6ddf6c7f7495061fb4db22b"} Jan 23 13:36:33 crc kubenswrapper[4771]: I0123 13:36:33.421276 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-622jl" Jan 23 13:36:33 crc kubenswrapper[4771]: I0123 13:36:33.421755 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-622jl" Jan 23 13:36:33 crc kubenswrapper[4771]: I0123 13:36:33.870189 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:36:33 crc kubenswrapper[4771]: I0123 13:36:33.870327 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:36:33 crc kubenswrapper[4771]: I0123 13:36:33.906609 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:36:34 crc kubenswrapper[4771]: I0123 13:36:34.463772 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-622jl" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="registry-server" probeResult="failure" output=< Jan 23 13:36:34 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 13:36:34 crc kubenswrapper[4771]: > Jan 23 13:36:34 crc kubenswrapper[4771]: I0123 13:36:34.542313 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:36:34 crc kubenswrapper[4771]: I0123 13:36:34.863985 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vlbq"] Jan 23 13:36:35 crc kubenswrapper[4771]: I0123 13:36:35.377879 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:36:35 crc kubenswrapper[4771]: I0123 13:36:35.379726 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:36:35 crc kubenswrapper[4771]: I0123 13:36:35.418853 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:36:35 crc kubenswrapper[4771]: I0123 13:36:35.509896 4771 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" event={"ID":"220369ff-19c1-4d3d-9f30-8b46ba83b630","Type":"ContainerStarted","Data":"2c1f4e2370fd7d57c21b3b1e2d28c939c2336b297a5443bbd74772b6339e871e"} Jan 23 13:36:35 crc kubenswrapper[4771]: I0123 13:36:35.584345 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vk9j6" Jan 23 13:36:35 crc kubenswrapper[4771]: I0123 13:36:35.606040 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" podStartSLOduration=4.606005217 podStartE2EDuration="4.606005217s" podCreationTimestamp="2026-01-23 13:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:36:35.558333054 +0000 UTC m=+236.580870689" watchObservedRunningTime="2026-01-23 13:36:35.606005217 +0000 UTC m=+236.628542842" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.438984 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.439046 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.482728 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.515071 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.515306 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6vlbq" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="registry-server" containerID="cri-o://75b1592ba986fac5be553b5748ee9e6360332ce5507bcb6d9358149bd1e90428" gracePeriod=2 Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.560395 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lgxnf" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.867433 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.867831 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:36:36 crc kubenswrapper[4771]: I0123 13:36:36.912503 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:36:37 crc kubenswrapper[4771]: I0123 13:36:37.523792 4771 generic.go:334] "Generic (PLEG): container finished" podID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerID="75b1592ba986fac5be553b5748ee9e6360332ce5507bcb6d9358149bd1e90428" exitCode=0 Jan 23 13:36:37 crc kubenswrapper[4771]: I0123 13:36:37.523954 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vlbq" event={"ID":"b9ec392b-4022-454f-ba4b-1a4d4d2edd87","Type":"ContainerDied","Data":"75b1592ba986fac5be553b5748ee9e6360332ce5507bcb6d9358149bd1e90428"} Jan 23 13:36:37 crc kubenswrapper[4771]: I0123 13:36:37.570853 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zfpkr" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.375360 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.435883 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-utilities\") pod \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.436042 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6chdr\" (UniqueName: \"kubernetes.io/projected/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-kube-api-access-6chdr\") pod \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.436131 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-catalog-content\") pod \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\" (UID: \"b9ec392b-4022-454f-ba4b-1a4d4d2edd87\") " Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.436719 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-utilities" (OuterVolumeSpecName: "utilities") pod "b9ec392b-4022-454f-ba4b-1a4d4d2edd87" (UID: "b9ec392b-4022-454f-ba4b-1a4d4d2edd87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.442550 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-kube-api-access-6chdr" (OuterVolumeSpecName: "kube-api-access-6chdr") pod "b9ec392b-4022-454f-ba4b-1a4d4d2edd87" (UID: "b9ec392b-4022-454f-ba4b-1a4d4d2edd87"). InnerVolumeSpecName "kube-api-access-6chdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.531404 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vlbq" event={"ID":"b9ec392b-4022-454f-ba4b-1a4d4d2edd87","Type":"ContainerDied","Data":"1fdf096ff04f5e4272563369bd7a3e63bdc11c30d6aeae06258976c660ac297d"} Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.531504 4771 scope.go:117] "RemoveContainer" containerID="75b1592ba986fac5be553b5748ee9e6360332ce5507bcb6d9358149bd1e90428" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.531523 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vlbq" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.538185 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.538214 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6chdr\" (UniqueName: \"kubernetes.io/projected/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-kube-api-access-6chdr\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.548859 4771 scope.go:117] "RemoveContainer" containerID="d4a45e9b13664bd22a5d1aa244cd6261b66b688a5a865559d07ade0ea0b183b0" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.577393 4771 scope.go:117] "RemoveContainer" containerID="dc98070498686ec4bbfdaba011f96a9befb97e3073c805c46c9c30a26e9b5d33" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.631467 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9ec392b-4022-454f-ba4b-1a4d4d2edd87" (UID: "b9ec392b-4022-454f-ba4b-1a4d4d2edd87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.639717 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ec392b-4022-454f-ba4b-1a4d4d2edd87-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.848578 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c59wl"] Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.877583 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vlbq"] Jan 23 13:36:38 crc kubenswrapper[4771]: I0123 13:36:38.899597 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6vlbq"] Jan 23 13:36:39 crc kubenswrapper[4771]: I0123 13:36:39.065361 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfpkr"] Jan 23 13:36:39 crc kubenswrapper[4771]: I0123 13:36:39.255219 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" path="/var/lib/kubelet/pods/b9ec392b-4022-454f-ba4b-1a4d4d2edd87/volumes" Jan 23 13:36:39 crc kubenswrapper[4771]: I0123 13:36:39.535928 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zfpkr" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="registry-server" containerID="cri-o://2f953cfa0fd16c8353600b3201b6e24bc9e6420c05262133eaa9d5f53eb7b01f" gracePeriod=2 Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.810338 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-622jl"] Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.812078 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-622jl" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="registry-server" containerID="cri-o://8ffa6d274dff3ea8695025b029b3d2fe52defcc1b897fff9ce47f8a67f62aeb3" gracePeriod=30 
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.836546 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfx4h"]
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.836974 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vfx4h" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="registry-server" containerID="cri-o://35f5e11180df8d5ef0391e3e1e8dfb4e95302fbbfd362c0bee50f227bb2d3f7d" gracePeriod=30
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.849533 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fbmxq"]
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.849883 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" podUID="ff39a4f5-5820-481f-9209-08004e3e5280" containerName="marketplace-operator" containerID="cri-o://1f2ad5b63257777b96b213525665ee3320bc2bdf4f28bd0320627723fc22cd52" gracePeriod=30
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.861186 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk9j6"]
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.861666 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vk9j6" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="registry-server" containerID="cri-o://17e0a3dd9633574a7c8541154b05fd75ce2a6387c58d03ced7f6883440282a2a" gracePeriod=30
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.869014 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tljgp"]
Jan 23 13:36:41 crc kubenswrapper[4771]: E0123 13:36:41.869360 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="extract-utilities"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.869382 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="extract-utilities"
Jan 23 13:36:41 crc kubenswrapper[4771]: E0123 13:36:41.869422 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="extract-content"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.869433 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="extract-content"
Jan 23 13:36:41 crc kubenswrapper[4771]: E0123 13:36:41.869455 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="registry-server"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.869464 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="registry-server"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.869589 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ec392b-4022-454f-ba4b-1a4d4d2edd87" containerName="registry-server"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.870116 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.874940 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lgxnf"]
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.875284 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lgxnf" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="registry-server" containerID="cri-o://7d8e423beb55ae62571a48031c91fa6e19894dcf7054e2ea27a7e55432f5e1ff" gracePeriod=30
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.876475 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tljgp"]
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.998658 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.998722 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:41 crc kubenswrapper[4771]: I0123 13:36:41.998765 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt65g\" (UniqueName: \"kubernetes.io/projected/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-kube-api-access-vt65g\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.100188 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.100247 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.100274 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt65g\" (UniqueName: \"kubernetes.io/projected/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-kube-api-access-vt65g\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.102545 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.108172 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.120795 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt65g\" (UniqueName: \"kubernetes.io/projected/9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a-kube-api-access-vt65g\") pod \"marketplace-operator-79b997595-tljgp\" (UID: \"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a\") " pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.188563 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.553077 4771 generic.go:334] "Generic (PLEG): container finished" podID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerID="7d8e423beb55ae62571a48031c91fa6e19894dcf7054e2ea27a7e55432f5e1ff" exitCode=0
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.553145 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgxnf" event={"ID":"91fa80db-fb28-4a7e-a93d-b1213f843dc1","Type":"ContainerDied","Data":"7d8e423beb55ae62571a48031c91fa6e19894dcf7054e2ea27a7e55432f5e1ff"}
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.555252 4771 generic.go:334] "Generic (PLEG): container finished" podID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerID="35f5e11180df8d5ef0391e3e1e8dfb4e95302fbbfd362c0bee50f227bb2d3f7d" exitCode=0
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.555311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfx4h" event={"ID":"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9","Type":"ContainerDied","Data":"35f5e11180df8d5ef0391e3e1e8dfb4e95302fbbfd362c0bee50f227bb2d3f7d"}
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.557952 4771 generic.go:334] "Generic (PLEG): container finished" podID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerID="17e0a3dd9633574a7c8541154b05fd75ce2a6387c58d03ced7f6883440282a2a" exitCode=0
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.557994 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk9j6" event={"ID":"1e885f8a-42cc-49ad-9b52-759c9adb8ad7","Type":"ContainerDied","Data":"17e0a3dd9633574a7c8541154b05fd75ce2a6387c58d03ced7f6883440282a2a"}
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.560112 4771 generic.go:334] "Generic (PLEG): container finished" podID="87561a78-e043-4586-be7e-1d25dcf42382" containerID="2f953cfa0fd16c8353600b3201b6e24bc9e6420c05262133eaa9d5f53eb7b01f" exitCode=0
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.560160 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfpkr" event={"ID":"87561a78-e043-4586-be7e-1d25dcf42382","Type":"ContainerDied","Data":"2f953cfa0fd16c8353600b3201b6e24bc9e6420c05262133eaa9d5f53eb7b01f"}
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.561471 4771 generic.go:334] "Generic (PLEG): container finished" podID="ff39a4f5-5820-481f-9209-08004e3e5280" containerID="1f2ad5b63257777b96b213525665ee3320bc2bdf4f28bd0320627723fc22cd52" exitCode=0
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.561520 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" event={"ID":"ff39a4f5-5820-481f-9209-08004e3e5280","Type":"ContainerDied","Data":"1f2ad5b63257777b96b213525665ee3320bc2bdf4f28bd0320627723fc22cd52"}
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.563189 4771 generic.go:334] "Generic (PLEG): container finished" podID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerID="8ffa6d274dff3ea8695025b029b3d2fe52defcc1b897fff9ce47f8a67f62aeb3" exitCode=0
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.563215 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622jl" event={"ID":"a8fac1eb-9145-43c5-83c8-bda72dae51d5","Type":"ContainerDied","Data":"8ffa6d274dff3ea8695025b029b3d2fe52defcc1b897fff9ce47f8a67f62aeb3"}
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.592535 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tljgp"]
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.611269 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfpkr"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.707315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-catalog-content\") pod \"87561a78-e043-4586-be7e-1d25dcf42382\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") "
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.708105 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-utilities\") pod \"87561a78-e043-4586-be7e-1d25dcf42382\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") "
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.708246 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6rr\" (UniqueName: \"kubernetes.io/projected/87561a78-e043-4586-be7e-1d25dcf42382-kube-api-access-dj6rr\") pod \"87561a78-e043-4586-be7e-1d25dcf42382\" (UID: \"87561a78-e043-4586-be7e-1d25dcf42382\") "
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.709958 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-utilities" (OuterVolumeSpecName: "utilities") pod "87561a78-e043-4586-be7e-1d25dcf42382" (UID: "87561a78-e043-4586-be7e-1d25dcf42382"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.721493 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87561a78-e043-4586-be7e-1d25dcf42382-kube-api-access-dj6rr" (OuterVolumeSpecName: "kube-api-access-dj6rr") pod "87561a78-e043-4586-be7e-1d25dcf42382" (UID: "87561a78-e043-4586-be7e-1d25dcf42382"). InnerVolumeSpecName "kube-api-access-dj6rr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.796570 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.809866 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6rr\" (UniqueName: \"kubernetes.io/projected/87561a78-e043-4586-be7e-1d25dcf42382-kube-api-access-dj6rr\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.809920 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.913953 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-utilities\") pod \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") "
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.914028 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-catalog-content\") pod \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") "
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.914080 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjwjj\" (UniqueName: \"kubernetes.io/projected/a8fac1eb-9145-43c5-83c8-bda72dae51d5-kube-api-access-xjwjj\") pod \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\" (UID: \"a8fac1eb-9145-43c5-83c8-bda72dae51d5\") "
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.916616 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-utilities" (OuterVolumeSpecName: "utilities") pod "a8fac1eb-9145-43c5-83c8-bda72dae51d5" (UID: "a8fac1eb-9145-43c5-83c8-bda72dae51d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.930876 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fac1eb-9145-43c5-83c8-bda72dae51d5-kube-api-access-xjwjj" (OuterVolumeSpecName: "kube-api-access-xjwjj") pod "a8fac1eb-9145-43c5-83c8-bda72dae51d5" (UID: "a8fac1eb-9145-43c5-83c8-bda72dae51d5"). InnerVolumeSpecName "kube-api-access-xjwjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.949147 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87561a78-e043-4586-be7e-1d25dcf42382" (UID: "87561a78-e043-4586-be7e-1d25dcf42382"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:42 crc kubenswrapper[4771]: I0123 13:36:42.995382 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8fac1eb-9145-43c5-83c8-bda72dae51d5" (UID: "a8fac1eb-9145-43c5-83c8-bda72dae51d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.015579 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.015617 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87561a78-e043-4586-be7e-1d25dcf42382-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.015629 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fac1eb-9145-43c5-83c8-bda72dae51d5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.015639 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjwjj\" (UniqueName: \"kubernetes.io/projected/a8fac1eb-9145-43c5-83c8-bda72dae51d5-kube-api-access-xjwjj\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.026812 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.034642 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgxnf"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.045499 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.054504 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk9j6"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.117903 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-utilities\") pod \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118016 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-catalog-content\") pod \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118047 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfg4c\" (UniqueName: \"kubernetes.io/projected/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-kube-api-access-dfg4c\") pod \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118070 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtfk6\" (UniqueName: \"kubernetes.io/projected/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-kube-api-access-xtfk6\") pod \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118098 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-catalog-content\") pod \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118158 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-trusted-ca\") pod \"ff39a4f5-5820-481f-9209-08004e3e5280\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118211 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkw52\" (UniqueName: \"kubernetes.io/projected/ff39a4f5-5820-481f-9209-08004e3e5280-kube-api-access-tkw52\") pod \"ff39a4f5-5820-481f-9209-08004e3e5280\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psr2q\" (UniqueName: \"kubernetes.io/projected/91fa80db-fb28-4a7e-a93d-b1213f843dc1-kube-api-access-psr2q\") pod \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118277 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-utilities\") pod \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\" (UID: \"91fa80db-fb28-4a7e-a93d-b1213f843dc1\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118312 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-utilities\") pod \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\" (UID: \"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118349 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-catalog-content\") pod \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\" (UID: \"1e885f8a-42cc-49ad-9b52-759c9adb8ad7\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.118370 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-operator-metrics\") pod \"ff39a4f5-5820-481f-9209-08004e3e5280\" (UID: \"ff39a4f5-5820-481f-9209-08004e3e5280\") "
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.119300 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ff39a4f5-5820-481f-9209-08004e3e5280" (UID: "ff39a4f5-5820-481f-9209-08004e3e5280"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.120285 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-utilities" (OuterVolumeSpecName: "utilities") pod "91fa80db-fb28-4a7e-a93d-b1213f843dc1" (UID: "91fa80db-fb28-4a7e-a93d-b1213f843dc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.120873 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-utilities" (OuterVolumeSpecName: "utilities") pod "1e885f8a-42cc-49ad-9b52-759c9adb8ad7" (UID: "1e885f8a-42cc-49ad-9b52-759c9adb8ad7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.122382 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-utilities" (OuterVolumeSpecName: "utilities") pod "330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" (UID: "330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.139964 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-kube-api-access-dfg4c" (OuterVolumeSpecName: "kube-api-access-dfg4c") pod "330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" (UID: "330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9"). InnerVolumeSpecName "kube-api-access-dfg4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.140021 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff39a4f5-5820-481f-9209-08004e3e5280-kube-api-access-tkw52" (OuterVolumeSpecName: "kube-api-access-tkw52") pod "ff39a4f5-5820-481f-9209-08004e3e5280" (UID: "ff39a4f5-5820-481f-9209-08004e3e5280"). InnerVolumeSpecName "kube-api-access-tkw52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.140709 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ff39a4f5-5820-481f-9209-08004e3e5280" (UID: "ff39a4f5-5820-481f-9209-08004e3e5280"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.140939 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-kube-api-access-xtfk6" (OuterVolumeSpecName: "kube-api-access-xtfk6") pod "1e885f8a-42cc-49ad-9b52-759c9adb8ad7" (UID: "1e885f8a-42cc-49ad-9b52-759c9adb8ad7"). InnerVolumeSpecName "kube-api-access-xtfk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.143320 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fa80db-fb28-4a7e-a93d-b1213f843dc1-kube-api-access-psr2q" (OuterVolumeSpecName: "kube-api-access-psr2q") pod "91fa80db-fb28-4a7e-a93d-b1213f843dc1" (UID: "91fa80db-fb28-4a7e-a93d-b1213f843dc1"). InnerVolumeSpecName "kube-api-access-psr2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.150501 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e885f8a-42cc-49ad-9b52-759c9adb8ad7" (UID: "1e885f8a-42cc-49ad-9b52-759c9adb8ad7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.185890 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" (UID: "330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.219974 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psr2q\" (UniqueName: \"kubernetes.io/projected/91fa80db-fb28-4a7e-a93d-b1213f843dc1-kube-api-access-psr2q\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220012 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220024 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220033 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220043 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220053 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220064 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220072 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfg4c\" (UniqueName: \"kubernetes.io/projected/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9-kube-api-access-dfg4c\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220080 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtfk6\" (UniqueName: \"kubernetes.io/projected/1e885f8a-42cc-49ad-9b52-759c9adb8ad7-kube-api-access-xtfk6\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220090 4771 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff39a4f5-5820-481f-9209-08004e3e5280-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.220097 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkw52\" (UniqueName: \"kubernetes.io/projected/ff39a4f5-5820-481f-9209-08004e3e5280-kube-api-access-tkw52\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.249156 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91fa80db-fb28-4a7e-a93d-b1213f843dc1" (UID: "91fa80db-fb28-4a7e-a93d-b1213f843dc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.321161 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91fa80db-fb28-4a7e-a93d-b1213f843dc1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.569052 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tljgp" event={"ID":"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a","Type":"ContainerStarted","Data":"a047877aecf89bbaea81683d802d61d25c498652576f718f943eb9617eb9498a"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.569103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tljgp" event={"ID":"9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a","Type":"ContainerStarted","Data":"1f4e087690b636103a7210a7774a28db3c864637367f15cababd3862eeec09db"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.569460 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.572755 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vk9j6" event={"ID":"1e885f8a-42cc-49ad-9b52-759c9adb8ad7","Type":"ContainerDied","Data":"f5a410657b522e0322de9af3d51075444f57bf00eb7a376103aaca5e3c77e7ff"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.572985 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vk9j6"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.573014 4771 scope.go:117] "RemoveContainer" containerID="17e0a3dd9633574a7c8541154b05fd75ce2a6387c58d03ced7f6883440282a2a"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.576241 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zfpkr" event={"ID":"87561a78-e043-4586-be7e-1d25dcf42382","Type":"ContainerDied","Data":"e5afc7b6501711f3b025d6ade104ee1f6413336ebfae458891b895567f073dbb"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.576396 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zfpkr"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.579606 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.579772 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fbmxq" event={"ID":"ff39a4f5-5820-481f-9209-08004e3e5280","Type":"ContainerDied","Data":"8a111ef7752f20895a9cf6115b0a8100f06ae6a401f0d6b135444e51abec3de8"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.579978 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tljgp"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.582052 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-622jl" event={"ID":"a8fac1eb-9145-43c5-83c8-bda72dae51d5","Type":"ContainerDied","Data":"1f7b4d9545c9d2caaea62dde5382b09c52d4121abafa4e1ebddd3c2f1e907e82"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.582101 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-622jl"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.584548 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lgxnf" event={"ID":"91fa80db-fb28-4a7e-a93d-b1213f843dc1","Type":"ContainerDied","Data":"e72a8872ab9169ca66ea1f206a92b31f215d080e194ffd9ea3c6593f33c08d77"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.584741 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lgxnf"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.588073 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfx4h" event={"ID":"330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9","Type":"ContainerDied","Data":"e2c32de3a931cb47ea93fd12634853ed98b679d339d4ffd914a7c367e0f2b9aa"}
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.588138 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfx4h"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.596521 4771 scope.go:117] "RemoveContainer" containerID="6dc54a088043ccf5e0b034a2492b8fb46cf469668ef87e6e505ccd81b5232320"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.599182 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tljgp" podStartSLOduration=2.599169698 podStartE2EDuration="2.599169698s" podCreationTimestamp="2026-01-23 13:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:36:43.593937578 +0000 UTC m=+244.616475293" watchObservedRunningTime="2026-01-23 13:36:43.599169698 +0000 UTC m=+244.621707323"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.611895 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fbmxq"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.616311 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fbmxq"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.618144 4771 scope.go:117] "RemoveContainer" containerID="3d8031716fd60e9c070afb10eb1805aa4ac8e716d30f770c46e561c452c6e0d1"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.632141 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zfpkr"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.637185 4771 scope.go:117] "RemoveContainer" containerID="2f953cfa0fd16c8353600b3201b6e24bc9e6420c05262133eaa9d5f53eb7b01f"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.637647 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zfpkr"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.659544 4771 scope.go:117] "RemoveContainer" containerID="9b5300c6fb3d32f600dcac3150a481db7e108b7630736dd2e916873cd50a3ae5"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.663972 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk9j6"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.670577 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vk9j6"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.686572 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-622jl"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.693542 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-622jl"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.703681 4771 scope.go:117] "RemoveContainer" containerID="e038384ebab99d7b60fd225744ab2268b37b256715d14e7408e205a1ecbd66a9"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.717209 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfx4h"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.725194 4771 scope.go:117] "RemoveContainer" containerID="1f2ad5b63257777b96b213525665ee3320bc2bdf4f28bd0320627723fc22cd52"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.729769 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vfx4h"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.738581 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lgxnf"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.742123 4771 scope.go:117] "RemoveContainer" containerID="8ffa6d274dff3ea8695025b029b3d2fe52defcc1b897fff9ce47f8a67f62aeb3"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.742590 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lgxnf"]
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.754097 4771 scope.go:117] "RemoveContainer" containerID="64d0d919d254e06d6534b2d863f9349a9a74b3d41daacecc6af1449f82a49b86"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.767248 4771 scope.go:117] "RemoveContainer" containerID="732b1dbdef267e136f6020990d2379aa06834bbce9387c6ee5964f13410d3e0e"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.785493 4771 scope.go:117] "RemoveContainer" containerID="7d8e423beb55ae62571a48031c91fa6e19894dcf7054e2ea27a7e55432f5e1ff"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.801394 4771 scope.go:117] "RemoveContainer" containerID="6dfdb251d0ff2bc2fd972dc9a9338a5b4e894d446cac955151561b0c77cc37ca"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.817772 4771 scope.go:117] "RemoveContainer" containerID="4c30cbbacc7b81e1a2276c964c5b0e157eeb41d214fd4c10e10e7d5dd1a3c870"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.840127 4771 scope.go:117] "RemoveContainer" containerID="35f5e11180df8d5ef0391e3e1e8dfb4e95302fbbfd362c0bee50f227bb2d3f7d"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.853352 4771 scope.go:117] "RemoveContainer" containerID="495c697dfc0ba6734fd1c349e5814963263b6cf89bed066918b395f6e57eff5b"
Jan 23 13:36:43 crc kubenswrapper[4771]: I0123 13:36:43.869197 4771 scope.go:117] "RemoveContainer" containerID="f369a2c5bc53ba930a94cda8309995fb71656640670119c087fe275561103c06"
Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.236563 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" path="/var/lib/kubelet/pods/1e885f8a-42cc-49ad-9b52-759c9adb8ad7/volumes"
Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.237341 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" path="/var/lib/kubelet/pods/330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9/volumes"
Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.238156 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87561a78-e043-4586-be7e-1d25dcf42382" path="/var/lib/kubelet/pods/87561a78-e043-4586-be7e-1d25dcf42382/volumes"
Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.240205 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" path="/var/lib/kubelet/pods/91fa80db-fb28-4a7e-a93d-b1213f843dc1/volumes"
Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.240921 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" path="/var/lib/kubelet/pods/a8fac1eb-9145-43c5-83c8-bda72dae51d5/volumes"
Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.242192 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff39a4f5-5820-481f-9209-08004e3e5280" path="/var/lib/kubelet/pods/ff39a4f5-5820-481f-9209-08004e3e5280/volumes"
Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478044 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9fcfx"]
Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478296 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478312 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478329 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478336 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478347 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478355 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478366 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478373 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478384 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478392 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478402 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478425 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478438 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478445 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478456 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff39a4f5-5820-481f-9209-08004e3e5280" containerName="marketplace-operator" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478465 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff39a4f5-5820-481f-9209-08004e3e5280" containerName="marketplace-operator" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478482 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478490 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478504 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478511 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478522 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478529 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="extract-utilities" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478540 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478547 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478556 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478563 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478574 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478582 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478590 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478599 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: E0123 13:36:45.478609 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478617 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="extract-content" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478737 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e885f8a-42cc-49ad-9b52-759c9adb8ad7" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478751 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff39a4f5-5820-481f-9209-08004e3e5280" containerName="marketplace-operator" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478763 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="330cd6a7-1942-4bf5-a7fc-b3acb8a00cf9" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478778 4771 
memory_manager.go:354] "RemoveStaleState removing state" podUID="87561a78-e043-4586-be7e-1d25dcf42382" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478788 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fac1eb-9145-43c5-83c8-bda72dae51d5" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.478799 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fa80db-fb28-4a7e-a93d-b1213f843dc1" containerName="registry-server" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.479737 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.483976 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.488981 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fcfx"] Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.552943 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae249f45-cd5f-4837-9b26-cd4981147454-catalog-content\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.553001 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae249f45-cd5f-4837-9b26-cd4981147454-utilities\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.553068 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q97lp\" (UniqueName: \"kubernetes.io/projected/ae249f45-cd5f-4837-9b26-cd4981147454-kube-api-access-q97lp\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.654490 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae249f45-cd5f-4837-9b26-cd4981147454-catalog-content\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.654560 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae249f45-cd5f-4837-9b26-cd4981147454-utilities\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.654592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q97lp\" (UniqueName: \"kubernetes.io/projected/ae249f45-cd5f-4837-9b26-cd4981147454-kube-api-access-q97lp\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc 
kubenswrapper[4771]: I0123 13:36:45.655167 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae249f45-cd5f-4837-9b26-cd4981147454-catalog-content\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.655257 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae249f45-cd5f-4837-9b26-cd4981147454-utilities\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.672612 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q97lp\" (UniqueName: \"kubernetes.io/projected/ae249f45-cd5f-4837-9b26-cd4981147454-kube-api-access-q97lp\") pod \"redhat-marketplace-9fcfx\" (UID: \"ae249f45-cd5f-4837-9b26-cd4981147454\") " pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:45 crc kubenswrapper[4771]: I0123 13:36:45.804434 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fcfx" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.076926 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ckzcq"] Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.078356 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.080289 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.090344 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckzcq"] Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.166635 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-utilities\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.166714 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-catalog-content\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.166750 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxlx\" (UniqueName: \"kubernetes.io/projected/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-kube-api-access-6rxlx\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.221226 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fcfx"] Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.267791 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-utilities\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.269125 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-catalog-content\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.269255 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rxlx\" (UniqueName: \"kubernetes.io/projected/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-kube-api-access-6rxlx\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.268375 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-utilities\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.270464 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-catalog-content\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.313385 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rxlx\" (UniqueName: \"kubernetes.io/projected/3bc17f58-61bd-4595-8d2b-83f9c2cc4514-kube-api-access-6rxlx\") pod \"redhat-operators-ckzcq\" (UID: \"3bc17f58-61bd-4595-8d2b-83f9c2cc4514\") " pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.393792 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ckzcq" Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.584286 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckzcq"] Jan 23 13:36:46 crc kubenswrapper[4771]: W0123 13:36:46.588317 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bc17f58_61bd_4595_8d2b_83f9c2cc4514.slice/crio-9d0bac8d2d990293449b762deacd68114da83e6c6d31029e9fcba101e6e8d97c WatchSource:0}: Error finding container 9d0bac8d2d990293449b762deacd68114da83e6c6d31029e9fcba101e6e8d97c: Status 404 returned error can't find the container with id 9d0bac8d2d990293449b762deacd68114da83e6c6d31029e9fcba101e6e8d97c Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.611790 4771 generic.go:334] "Generic (PLEG): container finished" podID="ae249f45-cd5f-4837-9b26-cd4981147454" containerID="744572cb493749d2955c3d9cc4e0e92c05f26e8246f4bd427bc9eefcfc9532f3" exitCode=0 Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.611892 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fcfx" event={"ID":"ae249f45-cd5f-4837-9b26-cd4981147454","Type":"ContainerDied","Data":"744572cb493749d2955c3d9cc4e0e92c05f26e8246f4bd427bc9eefcfc9532f3"} Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.611939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fcfx" event={"ID":"ae249f45-cd5f-4837-9b26-cd4981147454","Type":"ContainerStarted","Data":"50e377637104984807add93dba73e88505f84ea829e59b65a8ec6fad380f8fbc"} Jan 23 13:36:46 crc kubenswrapper[4771]: I0123 13:36:46.613704 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckzcq" event={"ID":"3bc17f58-61bd-4595-8d2b-83f9c2cc4514","Type":"ContainerStarted","Data":"9d0bac8d2d990293449b762deacd68114da83e6c6d31029e9fcba101e6e8d97c"} Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.486740 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.487913 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.488080 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.488273 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6" gracePeriod=15 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.488328 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd" gracePeriod=15 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.488388 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a" gracePeriod=15 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.488441 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0" gracePeriod=15 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.488297 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93" gracePeriod=15 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.488859 4771 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.489039 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489060 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.489072 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489080 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.489089 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489096 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.489116 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 13:36:47 crc 
kubenswrapper[4771]: I0123 13:36:47.489122 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.489132 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489138 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.489147 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489154 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489252 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489266 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489275 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489285 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489294 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.489400 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489424 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.489529 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.547505 4771 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589364 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589444 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589472 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589487 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589704 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.589749 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.622899 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.624022 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.624619 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a" exitCode=0 Jan 23 13:36:47 crc 
kubenswrapper[4771]: I0123 13:36:47.624652 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93" exitCode=0 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.624664 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd" exitCode=0 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.624674 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0" exitCode=2 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.624739 4771 scope.go:117] "RemoveContainer" containerID="3db8210cdaef404d0976b4b143e37b81b8426d6afd3c3f560faf384ccdd32e92" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.626907 4771 generic.go:334] "Generic (PLEG): container finished" podID="ae249f45-cd5f-4837-9b26-cd4981147454" containerID="0252d020f3456b64af7fc4a577114df14c3d6c00c4a582fdef7ea6681f5be01e" exitCode=0 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.626980 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fcfx" event={"ID":"ae249f45-cd5f-4837-9b26-cd4981147454","Type":"ContainerDied","Data":"0252d020f3456b64af7fc4a577114df14c3d6c00c4a582fdef7ea6681f5be01e"} Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.627650 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.627888 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.631111 4771 generic.go:334] "Generic (PLEG): container finished" podID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" containerID="2b17243dc76fad0ca22586f860fbaa40cef2317c045239fa554bf79044f822ea" exitCode=0 Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.631146 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckzcq" event={"ID":"3bc17f58-61bd-4595-8d2b-83f9c2cc4514","Type":"ContainerDied","Data":"2b17243dc76fad0ca22586f860fbaa40cef2317c045239fa554bf79044f822ea"} Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.632311 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.632800 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:47 crc kubenswrapper[4771]: E0123 13:36:47.633016 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-9fcfx.188d5fa8e54a5a10 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-9fcfx,UID:ae249f45-cd5f-4837-9b26-cd4981147454,APIVersion:v1,ResourceVersion:29627,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 13:36:47.632472592 +0000 UTC m=+248.655010217,LastTimestamp:2026-01-23 13:36:47.632472592 +0000 UTC m=+248.655010217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.633138 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691546 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691612 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691642 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691672 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691693 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691712 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691731 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691768 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691788 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691810 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691786 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691794 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691811 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.691835 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 
13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.692076 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.692248 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: I0123 13:36:47.848343 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:47 crc kubenswrapper[4771]: W0123 13:36:47.869022 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-187356f97f0bd63e04d1f55700e579f4180572b9339610aa5c440584469399bd WatchSource:0}: Error finding container 187356f97f0bd63e04d1f55700e579f4180572b9339610aa5c440584469399bd: Status 404 returned error can't find the container with id 187356f97f0bd63e04d1f55700e579f4180572b9339610aa5c440584469399bd Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.641642 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckzcq" event={"ID":"3bc17f58-61bd-4595-8d2b-83f9c2cc4514","Type":"ContainerStarted","Data":"7618ae40288f4e1dfa22dd32f952cf8acb084691e00808add1276be078ab402f"} Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.642437 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.643002 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.643295 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.646500 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.649851 4771 generic.go:334] "Generic (PLEG): container finished" podID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" containerID="483053ab1cc84e5c5e7a48f0a202c95dfed0ff6fde2ec8f4216fcde8de62f28f" exitCode=0 Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 
13:36:48.649963 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"54222a00-e7a5-4ceb-9b33-7e5a80a434c0","Type":"ContainerDied","Data":"483053ab1cc84e5c5e7a48f0a202c95dfed0ff6fde2ec8f4216fcde8de62f28f"} Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.650879 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.651482 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.651888 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f1a4be7f8aaea31e0995e9e2972414bcade7748788637cedfca2573235a6a3d8"} Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.651938 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"187356f97f0bd63e04d1f55700e579f4180572b9339610aa5c440584469399bd"} Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.652097 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.652472 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: E0123 13:36:48.652565 4771 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.653366 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.653694 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.653915 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.654191 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.655667 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fcfx" event={"ID":"ae249f45-cd5f-4837-9b26-cd4981147454","Type":"ContainerStarted","Data":"898f7c63d28327762aac4f3934d4ef77dc07da5f196e8da0e056822e39f529d7"} Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.656241 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.656588 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.656929 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:48 crc kubenswrapper[4771]: I0123 13:36:48.657542 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.230075 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.230776 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused" Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 
13:36:49.231085 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.231552 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.667637 4771 generic.go:334] "Generic (PLEG): container finished" podID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" containerID="7618ae40288f4e1dfa22dd32f952cf8acb084691e00808add1276be078ab402f" exitCode=0
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.667732 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckzcq" event={"ID":"3bc17f58-61bd-4595-8d2b-83f9c2cc4514","Type":"ContainerDied","Data":"7618ae40288f4e1dfa22dd32f952cf8acb084691e00808add1276be078ab402f"}
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.668481 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: E0123 13:36:49.669247 4771 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.669267 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.671489 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.938602 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.939964 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.940922 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.941466 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.942053 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:49 crc kubenswrapper[4771]: I0123 13:36:49.942356 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.001975 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.002589 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.002819 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.003005 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.003232 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kubelet-dir\") pod \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") "
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033254 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kube-api-access\") pod \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") "
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033291 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033310 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033332 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033327 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "54222a00-e7a5-4ceb-9b33-7e5a80a434c0" (UID: "54222a00-e7a5-4ceb-9b33-7e5a80a434c0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033344 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-var-lock\") pod \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\" (UID: \"54222a00-e7a5-4ceb-9b33-7e5a80a434c0\") "
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033377 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-var-lock" (OuterVolumeSpecName: "var-lock") pod "54222a00-e7a5-4ceb-9b33-7e5a80a434c0" (UID: "54222a00-e7a5-4ceb-9b33-7e5a80a434c0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033393 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033385 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033403 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033908 4771 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033924 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033933 4771 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033941 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.033948 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-var-lock\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.041956 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "54222a00-e7a5-4ceb-9b33-7e5a80a434c0" (UID: "54222a00-e7a5-4ceb-9b33-7e5a80a434c0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.135879 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54222a00-e7a5-4ceb-9b33-7e5a80a434c0-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.676423 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"54222a00-e7a5-4ceb-9b33-7e5a80a434c0","Type":"ContainerDied","Data":"bb76d1025ec10afb35381a310dd15d4e4c457607ccc373bc86a56a8ffd7ef328"}
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.676841 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb76d1025ec10afb35381a310dd15d4e4c457607ccc373bc86a56a8ffd7ef328"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.676488 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.679165 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckzcq" event={"ID":"3bc17f58-61bd-4595-8d2b-83f9c2cc4514","Type":"ContainerStarted","Data":"0c162d34723098399ffb2c63c5c9fbc94919e54d1da48373197545ec10b0f191"}
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.679849 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.680329 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.680820 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.681436 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.682085 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.682897 4771 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6" exitCode=0
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.682979 4771 scope.go:117] "RemoveContainer" containerID="1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.682983 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.692936 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.693401 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.694260 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.695155 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.704985 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.705510 4771 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.706013 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.706372 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.709946 4771 scope.go:117] "RemoveContainer" containerID="8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.727100 4771 scope.go:117] "RemoveContainer" containerID="6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.745759 4771 scope.go:117] "RemoveContainer" containerID="2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.765055 4771 scope.go:117] "RemoveContainer" containerID="c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.786077 4771 scope.go:117] "RemoveContainer" containerID="7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.820797 4771 scope.go:117] "RemoveContainer" containerID="1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a"
Jan 23 13:36:50 crc kubenswrapper[4771]: E0123 13:36:50.821524 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\": container with ID starting with 1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a not found: ID does not exist" containerID="1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.821559 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a"} err="failed to get container status \"1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\": rpc error: code = NotFound desc = could not find container \"1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a\": container with ID starting with 1d4dee910a2316a13502d7b5178afd328a213a3b1f20141bf717ee6faacc516a not found: ID does not exist"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.821585 4771 scope.go:117] "RemoveContainer" containerID="8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93"
Jan 23 13:36:50 crc kubenswrapper[4771]: E0123 13:36:50.822048 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\": container with ID starting with 8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93 not found: ID does not exist" containerID="8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.822073 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93"} err="failed to get container status \"8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\": rpc error: code = NotFound desc = could not find container \"8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93\": container with ID starting with 8bc4e8375006bb4ec66536650987af4dad3ddbc75118dd6db72830402acc0d93 not found: ID does not exist"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.822087 4771 scope.go:117] "RemoveContainer" containerID="6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd"
Jan 23 13:36:50 crc kubenswrapper[4771]: E0123 13:36:50.822844 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\": container with ID starting with 6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd not found: ID does not exist" containerID="6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.822872 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd"} err="failed to get container status \"6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\": rpc error: code = NotFound desc = could not find container \"6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd\": container with ID starting with 6698bc7c516f27c0f63fb951dbdaae0592e2b392c4b64c86d5bebf1c07c234dd not found: ID does not exist"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.822886 4771 scope.go:117] "RemoveContainer" containerID="2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0"
Jan 23 13:36:50 crc kubenswrapper[4771]: E0123 13:36:50.823336 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\": container with ID starting with 2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0 not found: ID does not exist" containerID="2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.823369 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0"} err="failed to get container status \"2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\": rpc error: code = NotFound desc = could not find container \"2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0\": container with ID starting with 2e8f03a55dd4b09340fbf26dcb55e7f9c3801d4cd67c2c3c28f0f07ab6c313c0 not found: ID does not exist"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.823388 4771 scope.go:117] "RemoveContainer" containerID="c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6"
Jan 23 13:36:50 crc kubenswrapper[4771]: E0123 13:36:50.823713 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\": container with ID starting with c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6 not found: ID does not exist" containerID="c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.823740 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6"} err="failed to get container status \"c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\": rpc error: code = NotFound desc = could not find container \"c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6\": container with ID starting with c50044079054b658a1de0bddfe3ef7b5d1eb82382f646eddc53adffbc74262e6 not found: ID does not exist"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.823753 4771 scope.go:117] "RemoveContainer" containerID="7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422"
Jan 23 13:36:50 crc kubenswrapper[4771]: E0123 13:36:50.824303 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\": container with ID starting with 7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422 not found: ID does not exist" containerID="7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422"
Jan 23 13:36:50 crc kubenswrapper[4771]: I0123 13:36:50.824323 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422"} err="failed to get container status \"7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\": rpc error: code = NotFound desc = could not find container \"7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422\": container with ID starting with 7f6ef7f318a06db960a5a2f17939b2442120583a8cb62646edcfd97f50af6422 not found: ID does not exist"
Jan 23 13:36:51 crc kubenswrapper[4771]: I0123 13:36:51.234829 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 23 13:36:52 crc kubenswrapper[4771]: I0123 13:36:52.030528 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw"
Jan 23 13:36:52 crc kubenswrapper[4771]: I0123 13:36:52.031364 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:52 crc kubenswrapper[4771]: I0123 13:36:52.032159 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:52 crc kubenswrapper[4771]: I0123 13:36:52.032700 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:52 crc kubenswrapper[4771]: I0123 13:36:52.032955 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:52 crc kubenswrapper[4771]: E0123 13:36:52.044262 4771 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" volumeName="registry-storage"
Jan 23 13:36:53 crc kubenswrapper[4771]: E0123 13:36:53.968954 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:53 crc kubenswrapper[4771]: E0123 13:36:53.970366 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:53 crc kubenswrapper[4771]: E0123 13:36:53.970644 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:53 crc kubenswrapper[4771]: E0123 13:36:53.970802 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:53 crc kubenswrapper[4771]: E0123 13:36:53.970950 4771 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:53 crc kubenswrapper[4771]: I0123 13:36:53.970976 4771 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 23 13:36:53 crc kubenswrapper[4771]: E0123 13:36:53.971119 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="200ms"
Jan 23 13:36:54 crc kubenswrapper[4771]: E0123 13:36:54.172596 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="400ms"
Jan 23 13:36:54 crc kubenswrapper[4771]: E0123 13:36:54.573880 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="800ms"
Jan 23 13:36:55 crc kubenswrapper[4771]: E0123 13:36:55.125474 4771 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.243:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-9fcfx.188d5fa8e54a5a10 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-9fcfx,UID:ae249f45-cd5f-4837-9b26-cd4981147454,APIVersion:v1,ResourceVersion:29627,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 13:36:47.632472592 +0000 UTC m=+248.655010217,LastTimestamp:2026-01-23 13:36:47.632472592 +0000 UTC m=+248.655010217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 13:36:55 crc kubenswrapper[4771]: E0123 13:36:55.375506 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="1.6s"
Jan 23 13:36:55 crc kubenswrapper[4771]: I0123 13:36:55.805036 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9fcfx"
Jan 23 13:36:55 crc kubenswrapper[4771]: I0123 13:36:55.805113 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9fcfx"
Jan 23 13:36:55 crc kubenswrapper[4771]: I0123 13:36:55.863145 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9fcfx"
Jan 23 13:36:55 crc kubenswrapper[4771]: I0123 13:36:55.864052 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:55 crc kubenswrapper[4771]: I0123 13:36:55.867563 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:55 crc kubenswrapper[4771]: I0123 13:36:55.867754 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:55 crc kubenswrapper[4771]: I0123 13:36:55.867902 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.394652 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ckzcq"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.394771 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ckzcq"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.438232 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ckzcq"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.438834 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.439531 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.440058 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.440462 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.766757 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ckzcq"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.767247 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9fcfx"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.767807 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.768150 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.768642 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.768986 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.769365 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.769782 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.770046 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: I0123 13:36:56.770245 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:56 crc kubenswrapper[4771]: E0123 13:36:56.977675 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="3.2s"
Jan 23 13:36:59 crc kubenswrapper[4771]: I0123 13:36:59.230770 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:59 crc kubenswrapper[4771]: I0123 13:36:59.231469 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:59 crc kubenswrapper[4771]: I0123 13:36:59.231801 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:36:59 crc kubenswrapper[4771]: I0123 13:36:59.232287 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:00 crc kubenswrapper[4771]: E0123 13:37:00.178845 4771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.243:6443: connect: connection refused" interval="6.4s"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.228059 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.229294 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.229601 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.229818 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.230007 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.248845 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.248894 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93"
Jan 23 13:37:00 crc kubenswrapper[4771]: E0123 13:37:00.249557 4771 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.250494 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:37:00 crc kubenswrapper[4771]: W0123 13:37:00.279671 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d61c9cc33db03bbf5ba2d38f3c90ba00a3b7fe36fa6520ab3f9f5240930f7b03 WatchSource:0}: Error finding container d61c9cc33db03bbf5ba2d38f3c90ba00a3b7fe36fa6520ab3f9f5240930f7b03: Status 404 returned error can't find the container with id d61c9cc33db03bbf5ba2d38f3c90ba00a3b7fe36fa6520ab3f9f5240930f7b03
Jan 23 13:37:00 crc kubenswrapper[4771]: I0123 13:37:00.745694 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d61c9cc33db03bbf5ba2d38f3c90ba00a3b7fe36fa6520ab3f9f5240930f7b03"}
Jan 23 13:37:01 crc kubenswrapper[4771]: I0123 13:37:01.668134 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 23 13:37:01 crc kubenswrapper[4771]: I0123 13:37:01.668206 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 23 13:37:02 crc kubenswrapper[4771]: I0123 13:37:02.757719 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"46304b66e7ed325127d1f2401ddab8749dea96003ac57c06af96fa25ac2866d5"}
Jan 23 13:37:03 crc kubenswrapper[4771]: I0123 13:37:03.899820 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" podUID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" containerName="oauth-openshift" containerID="cri-o://abc9dd69238f5cc36c402cf0edee1e97067cb75f989d23fb723a0c2cccd20198" gracePeriod=15
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.769373 4771 generic.go:334] "Generic (PLEG): container finished" podID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" containerID="abc9dd69238f5cc36c402cf0edee1e97067cb75f989d23fb723a0c2cccd20198" exitCode=0
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.769471 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" event={"ID":"48f25d01-9b0c-4851-9f6b-4a49fc631e4c","Type":"ContainerDied","Data":"abc9dd69238f5cc36c402cf0edee1e97067cb75f989d23fb723a0c2cccd20198"}
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.769506 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" event={"ID":"48f25d01-9b0c-4851-9f6b-4a49fc631e4c","Type":"ContainerDied","Data":"49e4c4ae35c01eab6a955214305d6d841af2b0815930ffb784b96a4e8d12be67"}
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.769521 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e4c4ae35c01eab6a955214305d6d841af2b0815930ffb784b96a4e8d12be67"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.770836 4771 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="46304b66e7ed325127d1f2401ddab8749dea96003ac57c06af96fa25ac2866d5" exitCode=0
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.770876 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"46304b66e7ed325127d1f2401ddab8749dea96003ac57c06af96fa25ac2866d5"}
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.771102 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.771120 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93"
Jan 23 13:37:04 crc kubenswrapper[4771]: E0123 13:37:04.771431 4771 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.243:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.771836 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.772144 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.772596 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.772888 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.774310 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.774337 4771 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968" exitCode=1
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.774354 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968"}
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.774617 4771 scope.go:117] "RemoveContainer" containerID="2e1816f0c054858eb920a5930adfc92e0cefe820aaf10d5fdc330baeace80968"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.775093 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.775287 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.775498 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.775650 4771 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.775957 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.776712 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.777338 4771 status_manager.go:851] "Failed to get status for pod" podUID="3bc17f58-61bd-4595-8d2b-83f9c2cc4514" pod="openshift-marketplace/redhat-operators-ckzcq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ckzcq\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.777639 4771 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.778001 4771 status_manager.go:851] "Failed to get status for pod" podUID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-c59wl\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.778284 4771 status_manager.go:851] "Failed to get status for pod" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.778493 4771 status_manager.go:851] "Failed to get status for pod" podUID="ae249f45-cd5f-4837-9b26-cd4981147454" pod="openshift-marketplace/redhat-marketplace-9fcfx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-9fcfx\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.778719 4771 status_manager.go:851] "Failed to get status for pod" podUID="220369ff-19c1-4d3d-9f30-8b46ba83b630" pod="openshift-image-registry/image-registry-66df7c8f76-pljhw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-pljhw\": dial tcp 38.102.83.243:6443: connect: connection refused"
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856341 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-trusted-ca-bundle\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856399 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-error\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856450 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-idp-0-file-data\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856473 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-provider-selection\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856526 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-ocp-branding-template\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856564 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-login\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856594 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-policies\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856619 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6wlc\" (UniqueName: \"kubernetes.io/projected/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-kube-api-access-d6wlc\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856638 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-cliconfig\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856663 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-serving-cert\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856699 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-dir\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856739 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-session\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856763 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-service-ca\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.856793 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-router-certs\") pod \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\" (UID: \"48f25d01-9b0c-4851-9f6b-4a49fc631e4c\") "
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.858558 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.859010 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.859024 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.859819 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.861145 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.863083 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.863322 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-kube-api-access-d6wlc" (OuterVolumeSpecName: "kube-api-access-d6wlc") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "kube-api-access-d6wlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.864086 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.864308 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.865268 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.865501 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.865885 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.869614 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.869842 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "48f25d01-9b0c-4851-9f6b-4a49fc631e4c" (UID: "48f25d01-9b0c-4851-9f6b-4a49fc631e4c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957829 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957862 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957875 4771 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957888 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6wlc\" (UniqueName: \"kubernetes.io/projected/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-kube-api-access-d6wlc\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957898 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957908 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957919 4771 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957930 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957939 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957949 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957958 4771 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957966 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957976 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:04 crc kubenswrapper[4771]: I0123 13:37:04.957985 4771 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48f25d01-9b0c-4851-9f6b-4a49fc631e4c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:05 crc kubenswrapper[4771]: I0123 13:37:05.784506 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9697ee6fcead8101baeeb5b875957a3913e732326998bdd5d5d611f0aacd1761"} Jan 23 13:37:05 crc kubenswrapper[4771]: I0123 13:37:05.785424 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ea80d4d2e3ce8899865b54aaa2db15492fe1e3e15a4b05f5af7cc3e80476399"} Jan 23 13:37:05 crc kubenswrapper[4771]: I0123 13:37:05.785540 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e3be8a9fd059bc3ff955c1faf997f52555b2eeaeeff535e192fb9aae2d6fb4a"} Jan 23 13:37:05 crc kubenswrapper[4771]: I0123 13:37:05.785604 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e3580b4b76e83833e0c623d2e06fa11aba4fcfd9a329f7596bb255d1b158db29"} Jan 23 13:37:05 crc kubenswrapper[4771]: I0123 13:37:05.787947 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 13:37:05 crc kubenswrapper[4771]: I0123 13:37:05.788112 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c59wl" Jan 23 13:37:05 crc kubenswrapper[4771]: I0123 13:37:05.788749 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bf78dee038c8ea9bc34f44095f8061beb78b5d5e3cc79318a557aa45edaa61be"} Jan 23 13:37:06 crc kubenswrapper[4771]: I0123 13:37:06.220304 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:37:06 crc kubenswrapper[4771]: I0123 13:37:06.222118 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 13:37:06 crc kubenswrapper[4771]: I0123 13:37:06.222201 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 13:37:06 crc kubenswrapper[4771]: I0123 13:37:06.796773 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5cbf169c45ae2b012fa779e24d0078f053b40ba3cef6c6f7dc18284789724e2b"} Jan 23 13:37:06 crc kubenswrapper[4771]: I0123 13:37:06.797122 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93" Jan 23 13:37:06 crc kubenswrapper[4771]: I0123 13:37:06.797140 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93" Jan 23 13:37:07 crc kubenswrapper[4771]: I0123 13:37:07.983804 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:37:10 crc kubenswrapper[4771]: I0123 13:37:10.251493 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:37:10 crc kubenswrapper[4771]: I0123 13:37:10.252047 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:37:10 crc kubenswrapper[4771]: I0123 13:37:10.252087 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:37:10 crc kubenswrapper[4771]: I0123 13:37:10.257712 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:37:11 crc kubenswrapper[4771]: I0123 13:37:11.805166 4771 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:37:11 crc kubenswrapper[4771]: I0123 13:37:11.822164 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93" Jan 23 13:37:11 crc kubenswrapper[4771]: I0123 13:37:11.822194 4771 mirror_client.go:130] "Deleting 
a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93" Jan 23 13:37:11 crc kubenswrapper[4771]: I0123 13:37:11.826008 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 13:37:11 crc kubenswrapper[4771]: I0123 13:37:11.828985 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="00e65731-50f3-444b-b444-6fe3c94e32cf" Jan 23 13:37:12 crc kubenswrapper[4771]: I0123 13:37:12.840532 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93" Jan 23 13:37:12 crc kubenswrapper[4771]: I0123 13:37:12.840577 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93" Jan 23 13:37:16 crc kubenswrapper[4771]: I0123 13:37:16.220463 4771 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 13:37:16 crc kubenswrapper[4771]: I0123 13:37:16.220900 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 13:37:19 crc kubenswrapper[4771]: I0123 13:37:19.253433 4771 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="00e65731-50f3-444b-b444-6fe3c94e32cf" Jan 23 13:37:22 crc kubenswrapper[4771]: I0123 13:37:22.025062 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 13:37:22 crc kubenswrapper[4771]: I0123 13:37:22.298741 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 13:37:22 crc kubenswrapper[4771]: I0123 13:37:22.580908 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 13:37:22 crc kubenswrapper[4771]: I0123 13:37:22.774095 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 13:37:22 crc kubenswrapper[4771]: I0123 13:37:22.946134 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 13:37:23 crc kubenswrapper[4771]: I0123 13:37:23.261703 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 13:37:23 crc kubenswrapper[4771]: I0123 13:37:23.262302 4771 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 13:37:23 crc kubenswrapper[4771]: I0123 13:37:23.333574 4771 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 13:37:23 crc kubenswrapper[4771]: I0123 13:37:23.662336 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.070462 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.117120 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.158825 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.176563 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.382686 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.443601 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.482342 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.608919 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.638058 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.720601 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.829658 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 13:37:24 crc kubenswrapper[4771]: I0123 13:37:24.969314 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.011101 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.028689 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.177854 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.329734 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.434802 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.486942 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.526573 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.580112 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.605733 4771 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.676379 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.777882 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.920327 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 13:37:25 crc kubenswrapper[4771]: I0123 13:37:25.978342 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.109729 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.122020 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.134340 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.177093 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.223915 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.228398 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.341813 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.443040 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.443730 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.578559 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.605473 4771 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.721906 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.850584 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.896351 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.935983 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.966540 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 13:37:26 crc kubenswrapper[4771]: I0123 13:37:26.973586 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.003795 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.112937 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.256931 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.262548 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.265149 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.378636 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.526683 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.573333 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.635574 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.694501 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.794156 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.794185 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.908518 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.911608 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.934990 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.935051 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 13:37:27 crc kubenswrapper[4771]: I0123 13:37:27.942315 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.028866 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.166925 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.229867 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.366178 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.441768 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.501825 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.546089 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.573520 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.630598 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.650245 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.823263 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.839937 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.889537 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 13:37:28 crc kubenswrapper[4771]: I0123 13:37:28.914882 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.003224 4771 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.022289 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.059242 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.076574 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.082591 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.252730 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.273830 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.521473 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.617650 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.681735 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.790962 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.849183 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.947980 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.979118 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 13:37:29 crc kubenswrapper[4771]: I0123 13:37:29.990691 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.139942 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.162333 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.173238 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.202972 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.353955 4771 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.368069 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.370611 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.508034 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.668751 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.854640 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.898592 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.923974 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.941564 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.957544 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.979793 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 13:37:30 crc kubenswrapper[4771]: I0123 13:37:30.997779 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.027236 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.036005 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.037112 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.060828 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.097607 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.141089 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.348262 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 
13:37:31.402787 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.459086 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.509533 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.567684 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.606003 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.717079 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.774588 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.827489 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.831236 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 13:37:31 crc kubenswrapper[4771]: I0123 13:37:31.866377 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.012218 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.013577 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.046788 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.114494 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.136892 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.193687 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.223716 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.233697 4771 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.275125 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.297834 4771 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.298118 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.312861 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.358384 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.376637 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.409619 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.429186 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.451569 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.528511 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.553105 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.656603 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.740112 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.808713 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.901042 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.927262 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.958836 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 13:37:32 crc kubenswrapper[4771]: I0123 13:37:32.992370 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.070786 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.146017 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: 
I0123 13:37:33.241088 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.285385 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.299184 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.359980 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.597912 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.659104 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.674972 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.736751 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.742188 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.793932 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.868386 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.879296 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.881517 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.883864 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.901461 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.927752 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 13:37:33 crc kubenswrapper[4771]: I0123 13:37:33.974192 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.076256 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.111432 4771 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.254550 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.269205 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.372643 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.522435 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.558931 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.580259 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.619470 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.723201 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 13:37:34 crc kubenswrapper[4771]: I0123 13:37:34.737296 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.118274 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.260203 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.451624 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.550808 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.606011 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.664533 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.803093 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 13:37:35 crc kubenswrapper[4771]: I0123 13:37:35.913826 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.085609 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.492771 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 13:37:36 crc 
kubenswrapper[4771]: I0123 13:37:36.514189 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.607984 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.649404 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.765549 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.831771 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.958223 4771 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.960536 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9fcfx" podStartSLOduration=50.532830498 podStartE2EDuration="51.96051336s" podCreationTimestamp="2026-01-23 13:36:45 +0000 UTC" firstStartedPulling="2026-01-23 13:36:46.616583998 +0000 UTC m=+247.639121623" lastFinishedPulling="2026-01-23 13:36:48.04426686 +0000 UTC m=+249.066804485" observedRunningTime="2026-01-23 13:37:11.528532858 +0000 UTC m=+272.551070483" watchObservedRunningTime="2026-01-23 13:37:36.96051336 +0000 UTC m=+297.983050985"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.961870 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ckzcq" podStartSLOduration=48.436916604 podStartE2EDuration="50.961863294s" podCreationTimestamp="2026-01-23 13:36:46 +0000 UTC" firstStartedPulling="2026-01-23 13:36:47.632686239 +0000 UTC m=+248.655223864" lastFinishedPulling="2026-01-23 13:36:50.157632929 +0000 UTC m=+251.180170554" observedRunningTime="2026-01-23 13:37:11.571131137 +0000 UTC m=+272.593668762" watchObservedRunningTime="2026-01-23 13:37:36.961863294 +0000 UTC m=+297.984400919"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.963876 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-c59wl"]
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.964011 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 13:37:36 crc kubenswrapper[4771]: E0123 13:37:36.964279 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" containerName="oauth-openshift"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.964305 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" containerName="oauth-openshift"
Jan 23 13:37:36 crc kubenswrapper[4771]: E0123 13:37:36.964317 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" containerName="installer"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.964326 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" containerName="installer"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.964489 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" containerName="oauth-openshift"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.964512 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="54222a00-e7a5-4ceb-9b33-7e5a80a434c0" containerName="installer"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.965051 4771 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.965100 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.965108 4771 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a8e30445-3412-4c78-8100-621a5938da93"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.969003 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.970158 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.970282 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.970701 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.971103 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.971487 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.971714 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.971840 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.972132 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.972213 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.973745 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.973936 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.976054 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.985610 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.986968 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.993646 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 23 13:37:36 crc kubenswrapper[4771]: I0123 13:37:36.997051 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.013617 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015591 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015653 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7ld\" (UniqueName: \"kubernetes.io/projected/f6514455-f740-4ee7-96cd-98dc3ededcd6-kube-api-access-pj7ld\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015681 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015739 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-login\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015767 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015799 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-session\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015834 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6514455-f740-4ee7-96cd-98dc3ededcd6-audit-dir\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015869 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-error\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015893 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-audit-policies\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.015918 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.016155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.016251 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.016302 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
\"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.029930 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.029913438 podStartE2EDuration="26.029913438s" podCreationTimestamp="2026-01-23 13:37:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:37:37.011939676 +0000 UTC m=+298.034477301" watchObservedRunningTime="2026-01-23 13:37:37.029913438 +0000 UTC m=+298.052451063" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117422 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117497 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117525 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117556 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117581 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7ld\" (UniqueName: \"kubernetes.io/projected/f6514455-f740-4ee7-96cd-98dc3ededcd6-kube-api-access-pj7ld\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117599 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 
13:37:37.117642 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-login\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117686 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117720 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-session\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117747 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6514455-f740-4ee7-96cd-98dc3ededcd6-audit-dir\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-error\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117790 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-audit-policies\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117810 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.117841 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.118647 4771 operation_generator.go:637] "MountVolume.SetUp 
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.118647 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6514455-f740-4ee7-96cd-98dc3ededcd6-audit-dir\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.119777 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-audit-policies\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.120092 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.120850 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.121699 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.126009 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.127485 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-error\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.129536 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.129871 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.131648 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.132324 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.132861 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-system-session\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.135867 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f6514455-f740-4ee7-96cd-98dc3ededcd6-v4-0-config-user-template-login\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.140759 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7ld\" (UniqueName: \"kubernetes.io/projected/f6514455-f740-4ee7-96cd-98dc3ededcd6-kube-api-access-pj7ld\") pod \"oauth-openshift-6cbd7f6dbc-95kc7\" (UID: \"f6514455-f740-4ee7-96cd-98dc3ededcd6\") " pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"
Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.235896 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f25d01-9b0c-4851-9f6b-4a49fc631e4c" path="/var/lib/kubelet/pods/48f25d01-9b0c-4851-9f6b-4a49fc631e4c/volumes"
Need to start a new one" pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.527902 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7"] Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.602093 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.989556 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" event={"ID":"f6514455-f740-4ee7-96cd-98dc3ededcd6","Type":"ContainerStarted","Data":"40f83785c63da81f36412efcd63fc52581243d36bd4e1d1b478a21a7c94e7a53"} Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.989995 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" event={"ID":"f6514455-f740-4ee7-96cd-98dc3ededcd6","Type":"ContainerStarted","Data":"661765fa212f529d999c2c22c45082290f1118b3a9487b3c8b5799aa4bd0d325"} Jan 23 13:37:37 crc kubenswrapper[4771]: I0123 13:37:37.990023 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:38 crc kubenswrapper[4771]: I0123 13:37:38.010382 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" podStartSLOduration=60.010362013 podStartE2EDuration="1m0.010362013s" podCreationTimestamp="2026-01-23 13:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:37:38.00624228 +0000 UTC m=+299.028779915" watchObservedRunningTime="2026-01-23 13:37:38.010362013 +0000 UTC m=+299.032899628" Jan 23 13:37:38 crc kubenswrapper[4771]: I0123 13:37:38.246244 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6cbd7f6dbc-95kc7" Jan 23 13:37:39 crc kubenswrapper[4771]: I0123 13:37:39.107730 4771 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 13:37:44 crc kubenswrapper[4771]: I0123 13:37:44.346997 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 13:37:44 crc kubenswrapper[4771]: I0123 13:37:44.471384 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-grzg6"] Jan 23 13:37:45 crc kubenswrapper[4771]: I0123 13:37:45.564835 4771 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 13:37:45 crc kubenswrapper[4771]: I0123 13:37:45.565133 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f1a4be7f8aaea31e0995e9e2972414bcade7748788637cedfca2573235a6a3d8" gracePeriod=5 Jan 23 13:37:47 crc kubenswrapper[4771]: I0123 13:37:47.803295 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 13:37:48 crc kubenswrapper[4771]: I0123 13:37:48.878681 4771 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 13:37:49 crc kubenswrapper[4771]: I0123 13:37:49.192621 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 13:37:50 crc kubenswrapper[4771]: I0123 13:37:50.715862 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.073227 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.073284 4771 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f1a4be7f8aaea31e0995e9e2972414bcade7748788637cedfca2573235a6a3d8" exitCode=137 Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.154698 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.154795 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216311 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216368 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216445 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216485 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216645 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216680 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216695 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.216686 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.224290 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.237996 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.318700 4771 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.318752 4771 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.318766 4771 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.318777 4771 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.318788 4771 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.360620 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 13:37:51 crc kubenswrapper[4771]: I0123 13:37:51.437140 4771 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 13:37:52 crc kubenswrapper[4771]: I0123 13:37:52.081315 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 13:37:52 crc kubenswrapper[4771]: I0123 13:37:52.081399 4771 scope.go:117] "RemoveContainer" containerID="f1a4be7f8aaea31e0995e9e2972414bcade7748788637cedfca2573235a6a3d8" Jan 23 13:37:52 crc kubenswrapper[4771]: I0123 13:37:52.081496 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 13:37:53 crc kubenswrapper[4771]: I0123 13:37:53.209672 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 13:37:55 crc kubenswrapper[4771]: I0123 13:37:55.212510 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 13:37:55 crc kubenswrapper[4771]: I0123 13:37:55.979695 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 13:37:56 crc kubenswrapper[4771]: I0123 13:37:56.215207 4771 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 13:38:00 crc kubenswrapper[4771]: I0123 13:38:00.556664 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 13:38:01 crc kubenswrapper[4771]: I0123 13:38:01.592704 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 13:38:02 crc kubenswrapper[4771]: I0123 13:38:02.177914 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 13:38:02 crc kubenswrapper[4771]: I0123 13:38:02.508004 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 13:38:02 crc kubenswrapper[4771]: I0123 13:38:02.771472 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 13:38:02 crc kubenswrapper[4771]: I0123 13:38:02.970838 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 13:38:03 crc kubenswrapper[4771]: I0123 13:38:03.541026 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 13:38:04 crc kubenswrapper[4771]: I0123 13:38:04.459906 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 13:38:04 crc kubenswrapper[4771]: I0123 13:38:04.855025 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 13:38:06 crc kubenswrapper[4771]: I0123 13:38:06.200291 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 13:38:08 crc kubenswrapper[4771]: I0123 13:38:08.460440 4771 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 13:38:09 crc kubenswrapper[4771]: I0123 13:38:09.235479 4771 
Jan 23 13:38:09 crc kubenswrapper[4771]: I0123 13:38:09.235479 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 23 13:38:09 crc kubenswrapper[4771]: I0123 13:38:09.323868 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 23 13:38:09 crc kubenswrapper[4771]: I0123 13:38:09.520759 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" podUID="44d2ff5d-162b-4773-ac29-54fa11375b9a" containerName="registry" containerID="cri-o://64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3" gracePeriod=30
Jan 23 13:38:09 crc kubenswrapper[4771]: I0123 13:38:09.889569 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6"
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.071686 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-tls\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.071849 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44d2ff5d-162b-4773-ac29-54fa11375b9a-installation-pull-secrets\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.071898 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-trusted-ca\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.071982 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7gw8\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-kube-api-access-s7gw8\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.072296 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.072378 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44d2ff5d-162b-4773-ac29-54fa11375b9a-ca-trust-extracted\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.072406 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-certificates\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.072511 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-bound-sa-token\") pod \"44d2ff5d-162b-4773-ac29-54fa11375b9a\" (UID: \"44d2ff5d-162b-4773-ac29-54fa11375b9a\") "
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.073011 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "44d2ff5d-162b-4773-ac29-54fa11375b9a" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.073158 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "44d2ff5d-162b-4773-ac29-54fa11375b9a" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.078797 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "44d2ff5d-162b-4773-ac29-54fa11375b9a" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.079626 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-kube-api-access-s7gw8" (OuterVolumeSpecName: "kube-api-access-s7gw8") pod "44d2ff5d-162b-4773-ac29-54fa11375b9a" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a"). InnerVolumeSpecName "kube-api-access-s7gw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.079780 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44d2ff5d-162b-4773-ac29-54fa11375b9a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "44d2ff5d-162b-4773-ac29-54fa11375b9a" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.079813 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "44d2ff5d-162b-4773-ac29-54fa11375b9a" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.090299 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d2ff5d-162b-4773-ac29-54fa11375b9a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "44d2ff5d-162b-4773-ac29-54fa11375b9a" (UID: "44d2ff5d-162b-4773-ac29-54fa11375b9a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.173988 4771 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.174027 4771 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.174041 4771 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/44d2ff5d-162b-4773-ac29-54fa11375b9a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.174053 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.174062 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7gw8\" (UniqueName: \"kubernetes.io/projected/44d2ff5d-162b-4773-ac29-54fa11375b9a-kube-api-access-s7gw8\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.174070 4771 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/44d2ff5d-162b-4773-ac29-54fa11375b9a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.174078 4771 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/44d2ff5d-162b-4773-ac29-54fa11375b9a-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.182172 4771 generic.go:334] "Generic (PLEG): container finished" podID="44d2ff5d-162b-4773-ac29-54fa11375b9a" containerID="64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3" exitCode=0 Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.182232 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" event={"ID":"44d2ff5d-162b-4773-ac29-54fa11375b9a","Type":"ContainerDied","Data":"64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3"} Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.182272 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" event={"ID":"44d2ff5d-162b-4773-ac29-54fa11375b9a","Type":"ContainerDied","Data":"bbe82e8a8236b79303fc6a9ef72b9df80b3f15ae955ddef8c377065ee093b77c"} Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.182295 4771 scope.go:117] "RemoveContainer" containerID="64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3" Jan 23 13:38:10 crc 
kubenswrapper[4771]: I0123 13:38:10.182373 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-grzg6" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.198801 4771 scope.go:117] "RemoveContainer" containerID="64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3" Jan 23 13:38:10 crc kubenswrapper[4771]: E0123 13:38:10.199473 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3\": container with ID starting with 64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3 not found: ID does not exist" containerID="64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.199521 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3"} err="failed to get container status \"64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3\": rpc error: code = NotFound desc = could not find container \"64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3\": container with ID starting with 64436f60c28a57fcc69622b075e841820aceaeb70ab5874ce1fade6a382b96f3 not found: ID does not exist" Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.221234 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-grzg6"] Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.226617 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-grzg6"] Jan 23 13:38:10 crc kubenswrapper[4771]: I0123 13:38:10.904534 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 13:38:11 crc kubenswrapper[4771]: I0123 13:38:11.235041 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d2ff5d-162b-4773-ac29-54fa11375b9a" path="/var/lib/kubelet/pods/44d2ff5d-162b-4773-ac29-54fa11375b9a/volumes" Jan 23 13:38:12 crc kubenswrapper[4771]: I0123 13:38:12.421012 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 13:38:12 crc kubenswrapper[4771]: I0123 13:38:12.716387 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 13:38:14 crc kubenswrapper[4771]: I0123 13:38:14.011850 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 13:38:14 crc kubenswrapper[4771]: I0123 13:38:14.255772 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 13:38:14 crc kubenswrapper[4771]: I0123 13:38:14.375902 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 13:38:15 crc kubenswrapper[4771]: I0123 13:38:15.095235 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 13:38:19 crc kubenswrapper[4771]: I0123 13:38:19.126246 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 13:38:27 crc 
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.580097 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j8jsw"]
Jan 23 13:38:27 crc kubenswrapper[4771]: E0123 13:38:27.581017 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d2ff5d-162b-4773-ac29-54fa11375b9a" containerName="registry"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.581031 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d2ff5d-162b-4773-ac29-54fa11375b9a" containerName="registry"
Jan 23 13:38:27 crc kubenswrapper[4771]: E0123 13:38:27.581052 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.581058 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.581166 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d2ff5d-162b-4773-ac29-54fa11375b9a" containerName="registry"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.581176 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.582045 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.586879 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.592184 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8jsw"]
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.693191 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hntsb\" (UniqueName: \"kubernetes.io/projected/77a36b6a-701f-46d6-9415-c7d2546a9fd7-kube-api-access-hntsb\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.693258 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a36b6a-701f-46d6-9415-c7d2546a9fd7-catalog-content\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.693316 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77a36b6a-701f-46d6-9415-c7d2546a9fd7-utilities\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.775456 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mxvj7"]
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.776553 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mxvj7"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.778477 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.786942 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxvj7"]
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.794744 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77a36b6a-701f-46d6-9415-c7d2546a9fd7-utilities\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.794961 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hntsb\" (UniqueName: \"kubernetes.io/projected/77a36b6a-701f-46d6-9415-c7d2546a9fd7-kube-api-access-hntsb\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.795002 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a36b6a-701f-46d6-9415-c7d2546a9fd7-catalog-content\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.795210 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77a36b6a-701f-46d6-9415-c7d2546a9fd7-utilities\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.795452 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77a36b6a-701f-46d6-9415-c7d2546a9fd7-catalog-content\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.820478 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hntsb\" (UniqueName: \"kubernetes.io/projected/77a36b6a-701f-46d6-9415-c7d2546a9fd7-kube-api-access-hntsb\") pod \"community-operators-j8jsw\" (UID: \"77a36b6a-701f-46d6-9415-c7d2546a9fd7\") " pod="openshift-marketplace/community-operators-j8jsw"
Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.897218 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnw6m\" (UniqueName: \"kubernetes.io/projected/42c56fef-fece-4a79-ac6e-dc70d22b414c-kube-api-access-vnw6m\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7"
\"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.897339 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c56fef-fece-4a79-ac6e-dc70d22b414c-utilities\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.900009 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8jsw" Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.998055 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c56fef-fece-4a79-ac6e-dc70d22b414c-utilities\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.998613 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnw6m\" (UniqueName: \"kubernetes.io/projected/42c56fef-fece-4a79-ac6e-dc70d22b414c-kube-api-access-vnw6m\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.998626 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c56fef-fece-4a79-ac6e-dc70d22b414c-utilities\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.998644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c56fef-fece-4a79-ac6e-dc70d22b414c-catalog-content\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:27 crc kubenswrapper[4771]: I0123 13:38:27.998914 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c56fef-fece-4a79-ac6e-dc70d22b414c-catalog-content\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.017640 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnw6m\" (UniqueName: \"kubernetes.io/projected/42c56fef-fece-4a79-ac6e-dc70d22b414c-kube-api-access-vnw6m\") pod \"certified-operators-mxvj7\" (UID: \"42c56fef-fece-4a79-ac6e-dc70d22b414c\") " pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.090403 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.129223 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8jsw"] Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.279098 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jsw" event={"ID":"77a36b6a-701f-46d6-9415-c7d2546a9fd7","Type":"ContainerStarted","Data":"de21e0ee129bd9c68def29a03c757194321719f81900ce59be6ef747dff47d42"} Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.308009 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mxvj7"] Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.352333 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xgb8j"] Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.352591 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" podUID="9feed86a-3d92-4b4b-81aa-57ddf242e7ed" containerName="controller-manager" containerID="cri-o://cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b" gracePeriod=30 Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.446370 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz"] Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.446659 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" podUID="a88dbdcd-6064-4186-8edd-16341379ef97" containerName="route-controller-manager" containerID="cri-o://aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa" gracePeriod=30 Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.699647 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.777457 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.808621 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-proxy-ca-bundles\") pod \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.808731 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-config\") pod \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.808772 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-client-ca\") pod \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.808853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckgh9\" (UniqueName: \"kubernetes.io/projected/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-kube-api-access-ckgh9\") pod \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.808876 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-serving-cert\") pod \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\" (UID: \"9feed86a-3d92-4b4b-81aa-57ddf242e7ed\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.809893 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-client-ca" (OuterVolumeSpecName: "client-ca") pod "9feed86a-3d92-4b4b-81aa-57ddf242e7ed" (UID: "9feed86a-3d92-4b4b-81aa-57ddf242e7ed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.810098 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-config" (OuterVolumeSpecName: "config") pod "9feed86a-3d92-4b4b-81aa-57ddf242e7ed" (UID: "9feed86a-3d92-4b4b-81aa-57ddf242e7ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.810742 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9feed86a-3d92-4b4b-81aa-57ddf242e7ed" (UID: "9feed86a-3d92-4b4b-81aa-57ddf242e7ed"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.815345 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-kube-api-access-ckgh9" (OuterVolumeSpecName: "kube-api-access-ckgh9") pod "9feed86a-3d92-4b4b-81aa-57ddf242e7ed" (UID: "9feed86a-3d92-4b4b-81aa-57ddf242e7ed"). 
InnerVolumeSpecName "kube-api-access-ckgh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.815566 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9feed86a-3d92-4b4b-81aa-57ddf242e7ed" (UID: "9feed86a-3d92-4b4b-81aa-57ddf242e7ed"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910537 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config\") pod \"a88dbdcd-6064-4186-8edd-16341379ef97\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910614 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert\") pod \"a88dbdcd-6064-4186-8edd-16341379ef97\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910677 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlw62\" (UniqueName: \"kubernetes.io/projected/a88dbdcd-6064-4186-8edd-16341379ef97-kube-api-access-rlw62\") pod \"a88dbdcd-6064-4186-8edd-16341379ef97\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910706 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca\") pod \"a88dbdcd-6064-4186-8edd-16341379ef97\" (UID: \"a88dbdcd-6064-4186-8edd-16341379ef97\") " Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910919 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910931 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910940 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910948 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckgh9\" (UniqueName: \"kubernetes.io/projected/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-kube-api-access-ckgh9\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.910958 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9feed86a-3d92-4b4b-81aa-57ddf242e7ed-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.911764 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca" (OuterVolumeSpecName: "client-ca") pod "a88dbdcd-6064-4186-8edd-16341379ef97" (UID: 
"a88dbdcd-6064-4186-8edd-16341379ef97"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.912273 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config" (OuterVolumeSpecName: "config") pod "a88dbdcd-6064-4186-8edd-16341379ef97" (UID: "a88dbdcd-6064-4186-8edd-16341379ef97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.915921 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a88dbdcd-6064-4186-8edd-16341379ef97" (UID: "a88dbdcd-6064-4186-8edd-16341379ef97"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:38:28 crc kubenswrapper[4771]: I0123 13:38:28.916036 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88dbdcd-6064-4186-8edd-16341379ef97-kube-api-access-rlw62" (OuterVolumeSpecName: "kube-api-access-rlw62") pod "a88dbdcd-6064-4186-8edd-16341379ef97" (UID: "a88dbdcd-6064-4186-8edd-16341379ef97"). InnerVolumeSpecName "kube-api-access-rlw62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.011878 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.011940 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a88dbdcd-6064-4186-8edd-16341379ef97-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.011960 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlw62\" (UniqueName: \"kubernetes.io/projected/a88dbdcd-6064-4186-8edd-16341379ef97-kube-api-access-rlw62\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.011974 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a88dbdcd-6064-4186-8edd-16341379ef97-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.104547 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx"] Jan 23 13:38:29 crc kubenswrapper[4771]: E0123 13:38:29.104787 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9feed86a-3d92-4b4b-81aa-57ddf242e7ed" containerName="controller-manager" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.104814 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9feed86a-3d92-4b4b-81aa-57ddf242e7ed" containerName="controller-manager" Jan 23 13:38:29 crc kubenswrapper[4771]: E0123 13:38:29.104825 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88dbdcd-6064-4186-8edd-16341379ef97" containerName="route-controller-manager" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.104833 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88dbdcd-6064-4186-8edd-16341379ef97" containerName="route-controller-manager" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 
13:38:29.104925 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88dbdcd-6064-4186-8edd-16341379ef97" containerName="route-controller-manager" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.104940 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9feed86a-3d92-4b4b-81aa-57ddf242e7ed" containerName="controller-manager" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.105318 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.117766 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k"] Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.118477 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.127253 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx"] Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.139957 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k"] Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.187147 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx"] Jan 23 13:38:29 crc kubenswrapper[4771]: E0123 13:38:29.187946 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-wdjcq proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" podUID="5a94951f-d778-4ee0-bd56-5a95a13a11bb" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.199235 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k"] Jan 23 13:38:29 crc kubenswrapper[4771]: E0123 13:38:29.199659 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-c5wkl serving-cert], unattached volumes=[], failed to process volumes=[client-ca config kube-api-access-c5wkl serving-cert]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" podUID="a43a091f-ccff-452d-a621-ef07cdfd6888" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.213791 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-config\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.214311 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdjcq\" (UniqueName: \"kubernetes.io/projected/5a94951f-d778-4ee0-bd56-5a95a13a11bb-kube-api-access-wdjcq\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc 
kubenswrapper[4771]: I0123 13:38:29.214523 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a94951f-d778-4ee0-bd56-5a95a13a11bb-serving-cert\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.214626 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-proxy-ca-bundles\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.214720 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-client-ca\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.285340 4771 generic.go:334] "Generic (PLEG): container finished" podID="42c56fef-fece-4a79-ac6e-dc70d22b414c" containerID="db74f8cff071b4de8b75e618f19316563d426b4fe892556f3245f1b7c9f94bf9" exitCode=0 Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.285656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvj7" event={"ID":"42c56fef-fece-4a79-ac6e-dc70d22b414c","Type":"ContainerDied","Data":"db74f8cff071b4de8b75e618f19316563d426b4fe892556f3245f1b7c9f94bf9"} Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.285757 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvj7" event={"ID":"42c56fef-fece-4a79-ac6e-dc70d22b414c","Type":"ContainerStarted","Data":"3d5d4e813e2247f2486b6ff9b02d1aad63051db4c2dce84772b3a618517b5a49"} Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.287650 4771 generic.go:334] "Generic (PLEG): container finished" podID="a88dbdcd-6064-4186-8edd-16341379ef97" containerID="aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa" exitCode=0 Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.287725 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.287669 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" event={"ID":"a88dbdcd-6064-4186-8edd-16341379ef97","Type":"ContainerDied","Data":"aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa"} Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.287772 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz" event={"ID":"a88dbdcd-6064-4186-8edd-16341379ef97","Type":"ContainerDied","Data":"8bcf63040344cd807116ec88aa4199ebc847ad0efbe657f3978699a2d69d6c4a"} Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.287802 4771 scope.go:117] "RemoveContainer" containerID="aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.290857 4771 generic.go:334] "Generic (PLEG): container finished" podID="77a36b6a-701f-46d6-9415-c7d2546a9fd7" containerID="8197c725a2b4e5995bab5a24184709061f2440d2f1bbb4fa227fcff94aa72d98" exitCode=0 Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.290921 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jsw" event={"ID":"77a36b6a-701f-46d6-9415-c7d2546a9fd7","Type":"ContainerDied","Data":"8197c725a2b4e5995bab5a24184709061f2440d2f1bbb4fa227fcff94aa72d98"} Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.293391 4771 generic.go:334] "Generic (PLEG): container finished" podID="9feed86a-3d92-4b4b-81aa-57ddf242e7ed" containerID="cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b" exitCode=0 Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.293473 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.293497 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.293496 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" event={"ID":"9feed86a-3d92-4b4b-81aa-57ddf242e7ed","Type":"ContainerDied","Data":"cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b"} Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.293555 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xgb8j" event={"ID":"9feed86a-3d92-4b4b-81aa-57ddf242e7ed","Type":"ContainerDied","Data":"4e2eb5dc02cd700725b2a800f43e73c89cce2bf2de6a6e55a522bdd12fdfa8f8"} Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.293661 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.305037 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.308933 4771 scope.go:117] "RemoveContainer" containerID="aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa" Jan 23 13:38:29 crc kubenswrapper[4771]: E0123 13:38:29.309586 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa\": container with ID starting with aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa not found: ID does not exist" containerID="aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.309841 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa"} err="failed to get container status \"aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa\": rpc error: code = NotFound desc = could not find container \"aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa\": container with ID starting with aed5ce9ee6693cee9e918bb28fdb29c48d1faf41c98251b06b1d742b6cfb2afa not found: ID does not exist" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.309878 4771 scope.go:117] "RemoveContainer" containerID="cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.311147 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317103 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-proxy-ca-bundles\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317161 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-config\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317243 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-client-ca\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317277 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-client-ca\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317314 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a091f-ccff-452d-a621-ef07cdfd6888-serving-cert\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317375 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wkl\" (UniqueName: \"kubernetes.io/projected/a43a091f-ccff-452d-a621-ef07cdfd6888-kube-api-access-c5wkl\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-config\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317445 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdjcq\" (UniqueName: \"kubernetes.io/projected/5a94951f-d778-4ee0-bd56-5a95a13a11bb-kube-api-access-wdjcq\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.317483 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a94951f-d778-4ee0-bd56-5a95a13a11bb-serving-cert\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.320055 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz"] Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.320964 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-proxy-ca-bundles\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.321851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-client-ca\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.322266 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-config\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 
crc kubenswrapper[4771]: I0123 13:38:29.323276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a94951f-d778-4ee0-bd56-5a95a13a11bb-serving-cert\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.325500 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-skbsz"] Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.327663 4771 scope.go:117] "RemoveContainer" containerID="cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b" Jan 23 13:38:29 crc kubenswrapper[4771]: E0123 13:38:29.328127 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b\": container with ID starting with cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b not found: ID does not exist" containerID="cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.328181 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b"} err="failed to get container status \"cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b\": rpc error: code = NotFound desc = could not find container \"cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b\": container with ID starting with cd986e28fd35794494a0015226aad09b4bc113a9f1420a890971115d5af42e2b not found: ID does not exist" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.346679 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xgb8j"] Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.348131 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdjcq\" (UniqueName: \"kubernetes.io/projected/5a94951f-d778-4ee0-bd56-5a95a13a11bb-kube-api-access-wdjcq\") pod \"controller-manager-7fcfbf5659-qt6sx\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.350788 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xgb8j"] Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.419235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-config\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.419530 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-client-ca\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.419562 
4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a091f-ccff-452d-a621-ef07cdfd6888-serving-cert\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.419611 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5wkl\" (UniqueName: \"kubernetes.io/projected/a43a091f-ccff-452d-a621-ef07cdfd6888-kube-api-access-c5wkl\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.420881 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-client-ca\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.422456 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-config\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.422888 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a091f-ccff-452d-a621-ef07cdfd6888-serving-cert\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.439661 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5wkl\" (UniqueName: \"kubernetes.io/projected/a43a091f-ccff-452d-a621-ef07cdfd6888-kube-api-access-c5wkl\") pod \"route-controller-manager-6987b54645-xm54k\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.520581 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a94951f-d778-4ee0-bd56-5a95a13a11bb-serving-cert\") pod \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.520632 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-config\") pod \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.520673 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdjcq\" (UniqueName: \"kubernetes.io/projected/5a94951f-d778-4ee0-bd56-5a95a13a11bb-kube-api-access-wdjcq\") pod \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\" (UID: 
\"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.520707 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-proxy-ca-bundles\") pod \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.520761 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-client-ca\") pod \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\" (UID: \"5a94951f-d778-4ee0-bd56-5a95a13a11bb\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.521341 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5a94951f-d778-4ee0-bd56-5a95a13a11bb" (UID: "5a94951f-d778-4ee0-bd56-5a95a13a11bb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.521370 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "5a94951f-d778-4ee0-bd56-5a95a13a11bb" (UID: "5a94951f-d778-4ee0-bd56-5a95a13a11bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.522079 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-config" (OuterVolumeSpecName: "config") pod "5a94951f-d778-4ee0-bd56-5a95a13a11bb" (UID: "5a94951f-d778-4ee0-bd56-5a95a13a11bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.524404 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a94951f-d778-4ee0-bd56-5a95a13a11bb-kube-api-access-wdjcq" (OuterVolumeSpecName: "kube-api-access-wdjcq") pod "5a94951f-d778-4ee0-bd56-5a95a13a11bb" (UID: "5a94951f-d778-4ee0-bd56-5a95a13a11bb"). InnerVolumeSpecName "kube-api-access-wdjcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.525298 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a94951f-d778-4ee0-bd56-5a95a13a11bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5a94951f-d778-4ee0-bd56-5a95a13a11bb" (UID: "5a94951f-d778-4ee0-bd56-5a95a13a11bb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.621994 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-client-ca\") pod \"a43a091f-ccff-452d-a621-ef07cdfd6888\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622122 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a091f-ccff-452d-a621-ef07cdfd6888-serving-cert\") pod \"a43a091f-ccff-452d-a621-ef07cdfd6888\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622152 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5wkl\" (UniqueName: \"kubernetes.io/projected/a43a091f-ccff-452d-a621-ef07cdfd6888-kube-api-access-c5wkl\") pod \"a43a091f-ccff-452d-a621-ef07cdfd6888\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-config\") pod \"a43a091f-ccff-452d-a621-ef07cdfd6888\" (UID: \"a43a091f-ccff-452d-a621-ef07cdfd6888\") " Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622392 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622473 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a94951f-d778-4ee0-bd56-5a95a13a11bb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622486 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622498 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdjcq\" (UniqueName: \"kubernetes.io/projected/5a94951f-d778-4ee0-bd56-5a95a13a11bb-kube-api-access-wdjcq\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622513 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a94951f-d778-4ee0-bd56-5a95a13a11bb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622896 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-client-ca" (OuterVolumeSpecName: "client-ca") pod "a43a091f-ccff-452d-a621-ef07cdfd6888" (UID: "a43a091f-ccff-452d-a621-ef07cdfd6888"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.622975 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-config" (OuterVolumeSpecName: "config") pod "a43a091f-ccff-452d-a621-ef07cdfd6888" (UID: "a43a091f-ccff-452d-a621-ef07cdfd6888"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.624873 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43a091f-ccff-452d-a621-ef07cdfd6888-kube-api-access-c5wkl" (OuterVolumeSpecName: "kube-api-access-c5wkl") pod "a43a091f-ccff-452d-a621-ef07cdfd6888" (UID: "a43a091f-ccff-452d-a621-ef07cdfd6888"). InnerVolumeSpecName "kube-api-access-c5wkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.624965 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43a091f-ccff-452d-a621-ef07cdfd6888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a43a091f-ccff-452d-a621-ef07cdfd6888" (UID: "a43a091f-ccff-452d-a621-ef07cdfd6888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.723491 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.723528 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a43a091f-ccff-452d-a621-ef07cdfd6888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.723542 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5wkl\" (UniqueName: \"kubernetes.io/projected/a43a091f-ccff-452d-a621-ef07cdfd6888-kube-api-access-c5wkl\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:29 crc kubenswrapper[4771]: I0123 13:38:29.723556 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a43a091f-ccff-452d-a621-ef07cdfd6888-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.302872 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jsw" event={"ID":"77a36b6a-701f-46d6-9415-c7d2546a9fd7","Type":"ContainerStarted","Data":"7fe089fcba89c89b67ce6a9b21fca5cd12e0830a6163867c5c6e4d1f0641b806"} Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.304391 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.304600 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.312139 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.312207 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.382355 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv"] Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.387213 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.392099 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.392363 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx"] Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.396541 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.396994 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.397269 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.397329 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.399915 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-qt6sx"] Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.402557 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.407470 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.408038 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv"] Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.418592 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k"] Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.426985 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-xm54k"] Jan 23 13:38:30 crc kubenswrapper[4771]: 
I0123 13:38:30.433016 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-proxy-ca-bundles\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.433125 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-client-ca\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.433155 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-serving-cert\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.433179 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-config\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.433226 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fw5r\" (UniqueName: \"kubernetes.io/projected/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-kube-api-access-7fw5r\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.534311 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-config\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.534436 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fw5r\" (UniqueName: \"kubernetes.io/projected/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-kube-api-access-7fw5r\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.534499 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-proxy-ca-bundles\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.534554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-client-ca\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.534580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-serving-cert\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.536191 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-proxy-ca-bundles\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.536246 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-client-ca\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.536987 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-config\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.559668 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-serving-cert\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.563158 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fw5r\" (UniqueName: \"kubernetes.io/projected/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-kube-api-access-7fw5r\") pod \"controller-manager-7f666bcdf6-vrwfv\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.765938 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:30 crc kubenswrapper[4771]: I0123 13:38:30.966789 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv"] Jan 23 13:38:30 crc kubenswrapper[4771]: W0123 13:38:30.976317 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f9baa2e_f04b_4770_a50d_8ca39e51abb4.slice/crio-ef060b2738d23bdd3f24ecf126a7369474ccfddf7c9ad0035a5bc544bf2a2aca WatchSource:0}: Error finding container ef060b2738d23bdd3f24ecf126a7369474ccfddf7c9ad0035a5bc544bf2a2aca: Status 404 returned error can't find the container with id ef060b2738d23bdd3f24ecf126a7369474ccfddf7c9ad0035a5bc544bf2a2aca Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.236398 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a94951f-d778-4ee0-bd56-5a95a13a11bb" path="/var/lib/kubelet/pods/5a94951f-d778-4ee0-bd56-5a95a13a11bb/volumes" Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.236883 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9feed86a-3d92-4b4b-81aa-57ddf242e7ed" path="/var/lib/kubelet/pods/9feed86a-3d92-4b4b-81aa-57ddf242e7ed/volumes" Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.237822 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43a091f-ccff-452d-a621-ef07cdfd6888" path="/var/lib/kubelet/pods/a43a091f-ccff-452d-a621-ef07cdfd6888/volumes" Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.238186 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88dbdcd-6064-4186-8edd-16341379ef97" path="/var/lib/kubelet/pods/a88dbdcd-6064-4186-8edd-16341379ef97/volumes" Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.311603 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" event={"ID":"5f9baa2e-f04b-4770-a50d-8ca39e51abb4","Type":"ContainerStarted","Data":"babc3287f4be19c1e233b05eb258a0844df71352771436be4be3453ecfb30c93"} Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.311648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" event={"ID":"5f9baa2e-f04b-4770-a50d-8ca39e51abb4","Type":"ContainerStarted","Data":"ef060b2738d23bdd3f24ecf126a7369474ccfddf7c9ad0035a5bc544bf2a2aca"} Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.311996 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.315201 4771 generic.go:334] "Generic (PLEG): container finished" podID="42c56fef-fece-4a79-ac6e-dc70d22b414c" containerID="01d9c13b6dbc7d931e23f6a5c64b47ec224705bf9810082ead73be7852f7ed3d" exitCode=0 Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.315282 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvj7" event={"ID":"42c56fef-fece-4a79-ac6e-dc70d22b414c","Type":"ContainerDied","Data":"01d9c13b6dbc7d931e23f6a5c64b47ec224705bf9810082ead73be7852f7ed3d"} Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.317144 4771 generic.go:334] "Generic (PLEG): container finished" podID="77a36b6a-701f-46d6-9415-c7d2546a9fd7" containerID="7fe089fcba89c89b67ce6a9b21fca5cd12e0830a6163867c5c6e4d1f0641b806" exitCode=0 Jan 23 
13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.317198 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jsw" event={"ID":"77a36b6a-701f-46d6-9415-c7d2546a9fd7","Type":"ContainerDied","Data":"7fe089fcba89c89b67ce6a9b21fca5cd12e0830a6163867c5c6e4d1f0641b806"} Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.322338 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:31 crc kubenswrapper[4771]: I0123 13:38:31.337155 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" podStartSLOduration=2.337135927 podStartE2EDuration="2.337135927s" podCreationTimestamp="2026-01-23 13:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:38:31.336811596 +0000 UTC m=+352.359349231" watchObservedRunningTime="2026-01-23 13:38:31.337135927 +0000 UTC m=+352.359673542" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.324892 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8jsw" event={"ID":"77a36b6a-701f-46d6-9415-c7d2546a9fd7","Type":"ContainerStarted","Data":"02c7a3ffd3b7b420c01fd299e5396fa63d9b6b6f708d7df65712bf1e0e532613"} Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.326971 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mxvj7" event={"ID":"42c56fef-fece-4a79-ac6e-dc70d22b414c","Type":"ContainerStarted","Data":"f488e51bd9330466bcdc4e6122d26c1b23e5e1c21be38681d7ab9315ab852843"} Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.343138 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j8jsw" podStartSLOduration=2.94010403 podStartE2EDuration="5.34311576s" podCreationTimestamp="2026-01-23 13:38:27 +0000 UTC" firstStartedPulling="2026-01-23 13:38:29.294636103 +0000 UTC m=+350.317173728" lastFinishedPulling="2026-01-23 13:38:31.697647833 +0000 UTC m=+352.720185458" observedRunningTime="2026-01-23 13:38:32.341353003 +0000 UTC m=+353.363890628" watchObservedRunningTime="2026-01-23 13:38:32.34311576 +0000 UTC m=+353.365653395" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.358449 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mxvj7" podStartSLOduration=2.939186999 podStartE2EDuration="5.358428205s" podCreationTimestamp="2026-01-23 13:38:27 +0000 UTC" firstStartedPulling="2026-01-23 13:38:29.287112049 +0000 UTC m=+350.309649674" lastFinishedPulling="2026-01-23 13:38:31.706353255 +0000 UTC m=+352.728890880" observedRunningTime="2026-01-23 13:38:32.356157172 +0000 UTC m=+353.378694797" watchObservedRunningTime="2026-01-23 13:38:32.358428205 +0000 UTC m=+353.380965830" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.849586 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm"] Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.850431 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.853619 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.854300 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.854401 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.856689 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.856715 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.856904 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.865645 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm"] Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.964699 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f46c29e-8911-4801-8ec7-ed351f4eedec-serving-cert\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.964784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgzrt\" (UniqueName: \"kubernetes.io/projected/7f46c29e-8911-4801-8ec7-ed351f4eedec-kube-api-access-wgzrt\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.964865 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-client-ca\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:32 crc kubenswrapper[4771]: I0123 13:38:32.965216 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-config\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.066028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-client-ca\") pod 
\"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.066097 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-config\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.066146 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f46c29e-8911-4801-8ec7-ed351f4eedec-serving-cert\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.066189 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgzrt\" (UniqueName: \"kubernetes.io/projected/7f46c29e-8911-4801-8ec7-ed351f4eedec-kube-api-access-wgzrt\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.067103 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-client-ca\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.067308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-config\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.074269 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f46c29e-8911-4801-8ec7-ed351f4eedec-serving-cert\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.086352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgzrt\" (UniqueName: \"kubernetes.io/projected/7f46c29e-8911-4801-8ec7-ed351f4eedec-kube-api-access-wgzrt\") pod \"route-controller-manager-7b59c8d778-9vzsm\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.175975 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:33 crc kubenswrapper[4771]: I0123 13:38:33.649437 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm"] Jan 23 13:38:33 crc kubenswrapper[4771]: W0123 13:38:33.655493 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f46c29e_8911_4801_8ec7_ed351f4eedec.slice/crio-c835b822d2be9bde220d96732ca9137fd8f722d91e30c976e858bf8bfb620a48 WatchSource:0}: Error finding container c835b822d2be9bde220d96732ca9137fd8f722d91e30c976e858bf8bfb620a48: Status 404 returned error can't find the container with id c835b822d2be9bde220d96732ca9137fd8f722d91e30c976e858bf8bfb620a48 Jan 23 13:38:34 crc kubenswrapper[4771]: I0123 13:38:34.341138 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" event={"ID":"7f46c29e-8911-4801-8ec7-ed351f4eedec","Type":"ContainerStarted","Data":"b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4"} Jan 23 13:38:34 crc kubenswrapper[4771]: I0123 13:38:34.341215 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" event={"ID":"7f46c29e-8911-4801-8ec7-ed351f4eedec","Type":"ContainerStarted","Data":"c835b822d2be9bde220d96732ca9137fd8f722d91e30c976e858bf8bfb620a48"} Jan 23 13:38:34 crc kubenswrapper[4771]: I0123 13:38:34.341527 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:34 crc kubenswrapper[4771]: I0123 13:38:34.348242 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:38:34 crc kubenswrapper[4771]: I0123 13:38:34.364720 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" podStartSLOduration=5.364691586 podStartE2EDuration="5.364691586s" podCreationTimestamp="2026-01-23 13:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:38:34.359711505 +0000 UTC m=+355.382249150" watchObservedRunningTime="2026-01-23 13:38:34.364691586 +0000 UTC m=+355.387229231" Jan 23 13:38:37 crc kubenswrapper[4771]: I0123 13:38:37.900921 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j8jsw" Jan 23 13:38:37 crc kubenswrapper[4771]: I0123 13:38:37.902284 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j8jsw" Jan 23 13:38:37 crc kubenswrapper[4771]: I0123 13:38:37.946334 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j8jsw" Jan 23 13:38:38 crc kubenswrapper[4771]: I0123 13:38:38.092296 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:38 crc kubenswrapper[4771]: I0123 13:38:38.092740 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 
13:38:38 crc kubenswrapper[4771]: I0123 13:38:38.135853 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:38 crc kubenswrapper[4771]: I0123 13:38:38.406171 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mxvj7" Jan 23 13:38:38 crc kubenswrapper[4771]: I0123 13:38:38.412682 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j8jsw" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.379517 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q6njm"] Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.380593 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.394449 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6njm"] Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.485885 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac08f4-4813-42fa-939d-c93bdb71e6de-catalog-content\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.486131 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qrs\" (UniqueName: \"kubernetes.io/projected/f7ac08f4-4813-42fa-939d-c93bdb71e6de-kube-api-access-n8qrs\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.486295 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac08f4-4813-42fa-939d-c93bdb71e6de-utilities\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.587245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac08f4-4813-42fa-939d-c93bdb71e6de-utilities\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.587381 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac08f4-4813-42fa-939d-c93bdb71e6de-catalog-content\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.587568 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qrs\" (UniqueName: \"kubernetes.io/projected/f7ac08f4-4813-42fa-939d-c93bdb71e6de-kube-api-access-n8qrs\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 
13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.588277 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7ac08f4-4813-42fa-939d-c93bdb71e6de-utilities\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.588302 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7ac08f4-4813-42fa-939d-c93bdb71e6de-catalog-content\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.613303 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qrs\" (UniqueName: \"kubernetes.io/projected/f7ac08f4-4813-42fa-939d-c93bdb71e6de-kube-api-access-n8qrs\") pod \"certified-operators-q6njm\" (UID: \"f7ac08f4-4813-42fa-939d-c93bdb71e6de\") " pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.701335 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.984122 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4x7x"] Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.985765 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:42 crc kubenswrapper[4771]: I0123 13:38:42.989871 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4x7x"] Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.117702 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e835f247-cf2e-43ee-9785-143a54f1dc97-catalog-content\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.117762 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e835f247-cf2e-43ee-9785-143a54f1dc97-utilities\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.117961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rct67\" (UniqueName: \"kubernetes.io/projected/e835f247-cf2e-43ee-9785-143a54f1dc97-kube-api-access-rct67\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.137576 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6njm"] Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.219348 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e835f247-cf2e-43ee-9785-143a54f1dc97-catalog-content\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.219394 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e835f247-cf2e-43ee-9785-143a54f1dc97-utilities\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.219491 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rct67\" (UniqueName: \"kubernetes.io/projected/e835f247-cf2e-43ee-9785-143a54f1dc97-kube-api-access-rct67\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.219966 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e835f247-cf2e-43ee-9785-143a54f1dc97-catalog-content\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.220014 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e835f247-cf2e-43ee-9785-143a54f1dc97-utilities\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.240004 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rct67\" (UniqueName: \"kubernetes.io/projected/e835f247-cf2e-43ee-9785-143a54f1dc97-kube-api-access-rct67\") pod \"community-operators-c4x7x\" (UID: \"e835f247-cf2e-43ee-9785-143a54f1dc97\") " pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.337917 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.407695 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6njm" event={"ID":"f7ac08f4-4813-42fa-939d-c93bdb71e6de","Type":"ContainerStarted","Data":"2d7feeac312913469f6dde98c5f876e6fb240a6023f89d0acff1304c654fb601"} Jan 23 13:38:43 crc kubenswrapper[4771]: I0123 13:38:43.809218 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4x7x"] Jan 23 13:38:44 crc kubenswrapper[4771]: I0123 13:38:44.416301 4771 generic.go:334] "Generic (PLEG): container finished" podID="f7ac08f4-4813-42fa-939d-c93bdb71e6de" containerID="448243b1b6b3250eabe475291d371809eba35cd266a4d6911d1f02d05c93596a" exitCode=0 Jan 23 13:38:44 crc kubenswrapper[4771]: I0123 13:38:44.416392 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6njm" event={"ID":"f7ac08f4-4813-42fa-939d-c93bdb71e6de","Type":"ContainerDied","Data":"448243b1b6b3250eabe475291d371809eba35cd266a4d6911d1f02d05c93596a"} Jan 23 13:38:44 crc kubenswrapper[4771]: I0123 13:38:44.417922 4771 generic.go:334] "Generic (PLEG): container finished" podID="e835f247-cf2e-43ee-9785-143a54f1dc97" containerID="eb6cbdb56fc60a92b59eb9986a2bf18c2a00034c0437a6ca832f96940ee26714" exitCode=0 Jan 23 13:38:44 crc kubenswrapper[4771]: I0123 13:38:44.417953 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4x7x" event={"ID":"e835f247-cf2e-43ee-9785-143a54f1dc97","Type":"ContainerDied","Data":"eb6cbdb56fc60a92b59eb9986a2bf18c2a00034c0437a6ca832f96940ee26714"} Jan 23 13:38:44 crc kubenswrapper[4771]: I0123 13:38:44.417975 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4x7x" event={"ID":"e835f247-cf2e-43ee-9785-143a54f1dc97","Type":"ContainerStarted","Data":"106d2a5dcaa835284c9cb62fb1358a613441c429a11a570364c888de7a97957e"} Jan 23 13:38:45 crc kubenswrapper[4771]: I0123 13:38:45.425903 4771 generic.go:334] "Generic (PLEG): container finished" podID="f7ac08f4-4813-42fa-939d-c93bdb71e6de" containerID="2cca545d7de5195b278a22bad7ce960d60ede32c471f424ae170886d42ebe173" exitCode=0 Jan 23 13:38:45 crc kubenswrapper[4771]: I0123 13:38:45.426061 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6njm" event={"ID":"f7ac08f4-4813-42fa-939d-c93bdb71e6de","Type":"ContainerDied","Data":"2cca545d7de5195b278a22bad7ce960d60ede32c471f424ae170886d42ebe173"} Jan 23 13:38:45 crc kubenswrapper[4771]: I0123 13:38:45.429386 4771 generic.go:334] "Generic (PLEG): container finished" podID="e835f247-cf2e-43ee-9785-143a54f1dc97" containerID="4b18f3f10d335873b6ad7c0ef89fd3bb80630987e8dc4e57b56a3255fafe4c58" exitCode=0 Jan 23 13:38:45 crc kubenswrapper[4771]: I0123 13:38:45.429462 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4x7x" event={"ID":"e835f247-cf2e-43ee-9785-143a54f1dc97","Type":"ContainerDied","Data":"4b18f3f10d335873b6ad7c0ef89fd3bb80630987e8dc4e57b56a3255fafe4c58"} Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.440161 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4x7x" event={"ID":"e835f247-cf2e-43ee-9785-143a54f1dc97","Type":"ContainerStarted","Data":"fa12d8c523563740a0fb6e314c09e373a7b7c05720ee22cc7cfc466d8a053bfe"} Jan 23 13:38:46 crc 
kubenswrapper[4771]: I0123 13:38:46.442512 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6njm" event={"ID":"f7ac08f4-4813-42fa-939d-c93bdb71e6de","Type":"ContainerStarted","Data":"b86edabb1b81197bd7c5d8c4e389d95ad71c16523e5ad822009d5efa9693bd36"} Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.464222 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4x7x" podStartSLOduration=3.025068413 podStartE2EDuration="4.464202205s" podCreationTimestamp="2026-01-23 13:38:42 +0000 UTC" firstStartedPulling="2026-01-23 13:38:44.419917762 +0000 UTC m=+365.442455387" lastFinishedPulling="2026-01-23 13:38:45.859051554 +0000 UTC m=+366.881589179" observedRunningTime="2026-01-23 13:38:46.458094777 +0000 UTC m=+367.480632422" watchObservedRunningTime="2026-01-23 13:38:46.464202205 +0000 UTC m=+367.486739830" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.475921 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q6njm" podStartSLOduration=2.979543139 podStartE2EDuration="4.475898664s" podCreationTimestamp="2026-01-23 13:38:42 +0000 UTC" firstStartedPulling="2026-01-23 13:38:44.41769671 +0000 UTC m=+365.440234335" lastFinishedPulling="2026-01-23 13:38:45.914052235 +0000 UTC m=+366.936589860" observedRunningTime="2026-01-23 13:38:46.474502488 +0000 UTC m=+367.497040123" watchObservedRunningTime="2026-01-23 13:38:46.475898664 +0000 UTC m=+367.498436299" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.780498 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dwmns"] Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.782244 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.789638 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dwmns"] Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.873075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52249bb5-fa7b-405e-a13d-be7d60744ae9-catalog-content\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.873150 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52249bb5-fa7b-405e-a13d-be7d60744ae9-utilities\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.873244 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp57g\" (UniqueName: \"kubernetes.io/projected/52249bb5-fa7b-405e-a13d-be7d60744ae9-kube-api-access-bp57g\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.974371 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52249bb5-fa7b-405e-a13d-be7d60744ae9-catalog-content\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.974465 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52249bb5-fa7b-405e-a13d-be7d60744ae9-utilities\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.974549 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp57g\" (UniqueName: \"kubernetes.io/projected/52249bb5-fa7b-405e-a13d-be7d60744ae9-kube-api-access-bp57g\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.974942 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52249bb5-fa7b-405e-a13d-be7d60744ae9-catalog-content\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.974993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52249bb5-fa7b-405e-a13d-be7d60744ae9-utilities\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:46 crc kubenswrapper[4771]: I0123 13:38:46.995721 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bp57g\" (UniqueName: \"kubernetes.io/projected/52249bb5-fa7b-405e-a13d-be7d60744ae9-kube-api-access-bp57g\") pod \"certified-operators-dwmns\" (UID: \"52249bb5-fa7b-405e-a13d-be7d60744ae9\") " pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.107669 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.375849 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2dzb"] Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.377271 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.380733 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5cxm\" (UniqueName: \"kubernetes.io/projected/b5601436-c1e4-461d-8e2a-23c32e5afc54-kube-api-access-d5cxm\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.380815 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5601436-c1e4-461d-8e2a-23c32e5afc54-catalog-content\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.380878 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5601436-c1e4-461d-8e2a-23c32e5afc54-utilities\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.388633 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2dzb"] Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.517969 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5601436-c1e4-461d-8e2a-23c32e5afc54-catalog-content\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.518121 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5601436-c1e4-461d-8e2a-23c32e5afc54-utilities\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.518239 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5cxm\" (UniqueName: \"kubernetes.io/projected/b5601436-c1e4-461d-8e2a-23c32e5afc54-kube-api-access-d5cxm\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.518536 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5601436-c1e4-461d-8e2a-23c32e5afc54-catalog-content\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.518589 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5601436-c1e4-461d-8e2a-23c32e5afc54-utilities\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.539900 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5cxm\" (UniqueName: \"kubernetes.io/projected/b5601436-c1e4-461d-8e2a-23c32e5afc54-kube-api-access-d5cxm\") pod \"community-operators-b2dzb\" (UID: \"b5601436-c1e4-461d-8e2a-23c32e5afc54\") " pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.547973 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dwmns"] Jan 23 13:38:47 crc kubenswrapper[4771]: W0123 13:38:47.555467 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52249bb5_fa7b_405e_a13d_be7d60744ae9.slice/crio-72fec13a7dad377614e79e73c8f0d0997b34a5759bef080f26b9b8aea620a1ec WatchSource:0}: Error finding container 72fec13a7dad377614e79e73c8f0d0997b34a5759bef080f26b9b8aea620a1ec: Status 404 returned error can't find the container with id 72fec13a7dad377614e79e73c8f0d0997b34a5759bef080f26b9b8aea620a1ec Jan 23 13:38:47 crc kubenswrapper[4771]: I0123 13:38:47.694131 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.104321 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2dzb"] Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.311074 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv"] Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.311677 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" podUID="5f9baa2e-f04b-4770-a50d-8ca39e51abb4" containerName="controller-manager" containerID="cri-o://babc3287f4be19c1e233b05eb258a0844df71352771436be4be3453ecfb30c93" gracePeriod=30 Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.457115 4771 generic.go:334] "Generic (PLEG): container finished" podID="5f9baa2e-f04b-4770-a50d-8ca39e51abb4" containerID="babc3287f4be19c1e233b05eb258a0844df71352771436be4be3453ecfb30c93" exitCode=0 Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.457202 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" event={"ID":"5f9baa2e-f04b-4770-a50d-8ca39e51abb4","Type":"ContainerDied","Data":"babc3287f4be19c1e233b05eb258a0844df71352771436be4be3453ecfb30c93"} Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.458387 4771 generic.go:334] "Generic (PLEG): container finished" podID="b5601436-c1e4-461d-8e2a-23c32e5afc54" containerID="e9855b538a19b4e7f39d8e44989dc19e4a6133539c5efe81ff608c3fb085ba5b" exitCode=0 Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.458463 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2dzb" event={"ID":"b5601436-c1e4-461d-8e2a-23c32e5afc54","Type":"ContainerDied","Data":"e9855b538a19b4e7f39d8e44989dc19e4a6133539c5efe81ff608c3fb085ba5b"} Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.458479 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2dzb" event={"ID":"b5601436-c1e4-461d-8e2a-23c32e5afc54","Type":"ContainerStarted","Data":"2d008f42739ada5b397fede7a7ca4629636f70793324d51f08d1eaa4062dd998"} Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.460299 4771 generic.go:334] "Generic (PLEG): container finished" podID="52249bb5-fa7b-405e-a13d-be7d60744ae9" containerID="3571a7f2d9dc0bba73aba26ac31ac07ee0e82fab5607cf328f6f50100d1a6d8a" exitCode=0 Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.460321 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmns" event={"ID":"52249bb5-fa7b-405e-a13d-be7d60744ae9","Type":"ContainerDied","Data":"3571a7f2d9dc0bba73aba26ac31ac07ee0e82fab5607cf328f6f50100d1a6d8a"} Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.460336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmns" event={"ID":"52249bb5-fa7b-405e-a13d-be7d60744ae9","Type":"ContainerStarted","Data":"72fec13a7dad377614e79e73c8f0d0997b34a5759bef080f26b9b8aea620a1ec"} Jan 23 13:38:48 crc kubenswrapper[4771]: I0123 13:38:48.847362 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.039347 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-config\") pod \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.039498 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-proxy-ca-bundles\") pod \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.039580 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-client-ca\") pod \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.039612 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fw5r\" (UniqueName: \"kubernetes.io/projected/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-kube-api-access-7fw5r\") pod \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.039643 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-serving-cert\") pod \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\" (UID: \"5f9baa2e-f04b-4770-a50d-8ca39e51abb4\") " Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.040444 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5f9baa2e-f04b-4770-a50d-8ca39e51abb4" (UID: "5f9baa2e-f04b-4770-a50d-8ca39e51abb4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.040572 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-config" (OuterVolumeSpecName: "config") pod "5f9baa2e-f04b-4770-a50d-8ca39e51abb4" (UID: "5f9baa2e-f04b-4770-a50d-8ca39e51abb4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.040621 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-client-ca" (OuterVolumeSpecName: "client-ca") pod "5f9baa2e-f04b-4770-a50d-8ca39e51abb4" (UID: "5f9baa2e-f04b-4770-a50d-8ca39e51abb4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.046841 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5f9baa2e-f04b-4770-a50d-8ca39e51abb4" (UID: "5f9baa2e-f04b-4770-a50d-8ca39e51abb4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.046861 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-kube-api-access-7fw5r" (OuterVolumeSpecName: "kube-api-access-7fw5r") pod "5f9baa2e-f04b-4770-a50d-8ca39e51abb4" (UID: "5f9baa2e-f04b-4770-a50d-8ca39e51abb4"). InnerVolumeSpecName "kube-api-access-7fw5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.141344 4771 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.141382 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.141392 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fw5r\" (UniqueName: \"kubernetes.io/projected/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-kube-api-access-7fw5r\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.141403 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.141431 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f9baa2e-f04b-4770-a50d-8ca39e51abb4-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.468334 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.468323 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv" event={"ID":"5f9baa2e-f04b-4770-a50d-8ca39e51abb4","Type":"ContainerDied","Data":"ef060b2738d23bdd3f24ecf126a7369474ccfddf7c9ad0035a5bc544bf2a2aca"} Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.468474 4771 scope.go:117] "RemoveContainer" containerID="babc3287f4be19c1e233b05eb258a0844df71352771436be4be3453ecfb30c93" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.473628 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2dzb" event={"ID":"b5601436-c1e4-461d-8e2a-23c32e5afc54","Type":"ContainerStarted","Data":"341f7e7f008d4571b42ffe3904032d48c4d04e8fdd5ef77959b2f8ae31c5646f"} Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.476555 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmns" event={"ID":"52249bb5-fa7b-405e-a13d-be7d60744ae9","Type":"ContainerStarted","Data":"86b94a0f256fe71153518b5e2d97a033998eaf02dfab4ccf76507e8d7f031b10"} Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.513107 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv"] Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.516287 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f666bcdf6-vrwfv"] Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.864797 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq"] Jan 23 13:38:49 crc kubenswrapper[4771]: E0123 13:38:49.865153 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9baa2e-f04b-4770-a50d-8ca39e51abb4" containerName="controller-manager" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.865177 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9baa2e-f04b-4770-a50d-8ca39e51abb4" containerName="controller-manager" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.865455 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9baa2e-f04b-4770-a50d-8ca39e51abb4" containerName="controller-manager" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.866058 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.871789 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.872368 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.872375 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.872397 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.873212 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.882141 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.882464 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 13:38:49 crc kubenswrapper[4771]: I0123 13:38:49.893755 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq"] Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.054173 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-proxy-ca-bundles\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.054266 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b060dab-5951-4ccf-b98e-f8f34fe50793-serving-cert\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.054374 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-client-ca\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.054401 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-config\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.054444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj5xv\" (UniqueName: 
\"kubernetes.io/projected/2b060dab-5951-4ccf-b98e-f8f34fe50793-kube-api-access-xj5xv\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.155968 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj5xv\" (UniqueName: \"kubernetes.io/projected/2b060dab-5951-4ccf-b98e-f8f34fe50793-kube-api-access-xj5xv\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.156362 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-proxy-ca-bundles\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.156566 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b060dab-5951-4ccf-b98e-f8f34fe50793-serving-cert\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.156751 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-client-ca\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.156908 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-config\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.159007 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-config\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.160575 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-client-ca\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.161872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b060dab-5951-4ccf-b98e-f8f34fe50793-proxy-ca-bundles\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " 
pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.170583 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b060dab-5951-4ccf-b98e-f8f34fe50793-serving-cert\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.178033 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj5xv\" (UniqueName: \"kubernetes.io/projected/2b060dab-5951-4ccf-b98e-f8f34fe50793-kube-api-access-xj5xv\") pod \"controller-manager-7fcfbf5659-bxbfq\" (UID: \"2b060dab-5951-4ccf-b98e-f8f34fe50793\") " pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.186981 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.436101 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq"] Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.487739 4771 generic.go:334] "Generic (PLEG): container finished" podID="52249bb5-fa7b-405e-a13d-be7d60744ae9" containerID="86b94a0f256fe71153518b5e2d97a033998eaf02dfab4ccf76507e8d7f031b10" exitCode=0 Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.487836 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmns" event={"ID":"52249bb5-fa7b-405e-a13d-be7d60744ae9","Type":"ContainerDied","Data":"86b94a0f256fe71153518b5e2d97a033998eaf02dfab4ccf76507e8d7f031b10"} Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.489946 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" event={"ID":"2b060dab-5951-4ccf-b98e-f8f34fe50793","Type":"ContainerStarted","Data":"0db6648e8b2b189aa6ff237bcffbe8d5e636dcd2e7f707f4bebaa6d97e4aa98d"} Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.506721 4771 generic.go:334] "Generic (PLEG): container finished" podID="b5601436-c1e4-461d-8e2a-23c32e5afc54" containerID="341f7e7f008d4571b42ffe3904032d48c4d04e8fdd5ef77959b2f8ae31c5646f" exitCode=0 Jan 23 13:38:50 crc kubenswrapper[4771]: I0123 13:38:50.506808 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2dzb" event={"ID":"b5601436-c1e4-461d-8e2a-23c32e5afc54","Type":"ContainerDied","Data":"341f7e7f008d4571b42ffe3904032d48c4d04e8fdd5ef77959b2f8ae31c5646f"} Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.181343 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-96tmq"] Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.182854 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.190311 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96tmq"] Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.238725 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9baa2e-f04b-4770-a50d-8ca39e51abb4" path="/var/lib/kubelet/pods/5f9baa2e-f04b-4770-a50d-8ca39e51abb4/volumes" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.372288 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fab9170-3473-4996-b3c4-55b1f7b4f05a-catalog-content\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.372461 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fab9170-3473-4996-b3c4-55b1f7b4f05a-utilities\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.372535 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjhv4\" (UniqueName: \"kubernetes.io/projected/9fab9170-3473-4996-b3c4-55b1f7b4f05a-kube-api-access-bjhv4\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.473680 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjhv4\" (UniqueName: \"kubernetes.io/projected/9fab9170-3473-4996-b3c4-55b1f7b4f05a-kube-api-access-bjhv4\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.473777 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fab9170-3473-4996-b3c4-55b1f7b4f05a-catalog-content\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.473799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fab9170-3473-4996-b3c4-55b1f7b4f05a-utilities\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.474275 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fab9170-3473-4996-b3c4-55b1f7b4f05a-catalog-content\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.474308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9fab9170-3473-4996-b3c4-55b1f7b4f05a-utilities\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.500367 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjhv4\" (UniqueName: \"kubernetes.io/projected/9fab9170-3473-4996-b3c4-55b1f7b4f05a-kube-api-access-bjhv4\") pod \"certified-operators-96tmq\" (UID: \"9fab9170-3473-4996-b3c4-55b1f7b4f05a\") " pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.514843 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" event={"ID":"2b060dab-5951-4ccf-b98e-f8f34fe50793","Type":"ContainerStarted","Data":"c0f51614c4878ca5d06e113fa71af2011737181b08b750f9a40fb8cfa92b26dc"} Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.515129 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.520646 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.534452 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fcfbf5659-bxbfq" podStartSLOduration=3.5344284740000003 podStartE2EDuration="3.534428474s" podCreationTimestamp="2026-01-23 13:38:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:38:51.531399356 +0000 UTC m=+372.553936981" watchObservedRunningTime="2026-01-23 13:38:51.534428474 +0000 UTC m=+372.556966099" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.778474 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j5d5v"] Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.779876 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.790163 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5d5v"] Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.808140 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.979956 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be36885-6a91-48ec-b6af-a5abcbaaed97-catalog-content\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.980249 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be36885-6a91-48ec-b6af-a5abcbaaed97-utilities\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:51 crc kubenswrapper[4771]: I0123 13:38:51.980286 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6xd\" (UniqueName: \"kubernetes.io/projected/5be36885-6a91-48ec-b6af-a5abcbaaed97-kube-api-access-5r6xd\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.038898 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-96tmq"] Jan 23 13:38:52 crc kubenswrapper[4771]: W0123 13:38:52.046689 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fab9170_3473_4996_b3c4_55b1f7b4f05a.slice/crio-228123572fb3c6276b1ed415b5382ed81888ecaf1c3517424ff21636c0c4368c WatchSource:0}: Error finding container 228123572fb3c6276b1ed415b5382ed81888ecaf1c3517424ff21636c0c4368c: Status 404 returned error can't find the container with id 228123572fb3c6276b1ed415b5382ed81888ecaf1c3517424ff21636c0c4368c Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.085210 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6xd\" (UniqueName: \"kubernetes.io/projected/5be36885-6a91-48ec-b6af-a5abcbaaed97-kube-api-access-5r6xd\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.085330 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be36885-6a91-48ec-b6af-a5abcbaaed97-catalog-content\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.085379 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be36885-6a91-48ec-b6af-a5abcbaaed97-utilities\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.086104 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be36885-6a91-48ec-b6af-a5abcbaaed97-utilities\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " 
pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.086740 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be36885-6a91-48ec-b6af-a5abcbaaed97-catalog-content\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.108271 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6xd\" (UniqueName: \"kubernetes.io/projected/5be36885-6a91-48ec-b6af-a5abcbaaed97-kube-api-access-5r6xd\") pod \"community-operators-j5d5v\" (UID: \"5be36885-6a91-48ec-b6af-a5abcbaaed97\") " pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.395754 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.529273 4771 generic.go:334] "Generic (PLEG): container finished" podID="9fab9170-3473-4996-b3c4-55b1f7b4f05a" containerID="919ff9686917850d4792f4bd5a90e7e79a62bca369ad5dbf3239bdacee7f70cf" exitCode=0 Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.529364 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96tmq" event={"ID":"9fab9170-3473-4996-b3c4-55b1f7b4f05a","Type":"ContainerDied","Data":"919ff9686917850d4792f4bd5a90e7e79a62bca369ad5dbf3239bdacee7f70cf"} Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.529907 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96tmq" event={"ID":"9fab9170-3473-4996-b3c4-55b1f7b4f05a","Type":"ContainerStarted","Data":"228123572fb3c6276b1ed415b5382ed81888ecaf1c3517424ff21636c0c4368c"} Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.533859 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2dzb" event={"ID":"b5601436-c1e4-461d-8e2a-23c32e5afc54","Type":"ContainerStarted","Data":"c6ef18429e82ee23042ed5756e93d5c6cf547f2890ad79c3697b6c4a6d38cec3"} Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.543610 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dwmns" event={"ID":"52249bb5-fa7b-405e-a13d-be7d60744ae9","Type":"ContainerStarted","Data":"4d561a418a9da87898832ad20173418c7a2eb0e3d0707cb0e3e4e792beea87d1"} Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.584144 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b2dzb" podStartSLOduration=2.53579606 podStartE2EDuration="5.584124242s" podCreationTimestamp="2026-01-23 13:38:47 +0000 UTC" firstStartedPulling="2026-01-23 13:38:48.461101462 +0000 UTC m=+369.483639087" lastFinishedPulling="2026-01-23 13:38:51.509429624 +0000 UTC m=+372.531967269" observedRunningTime="2026-01-23 13:38:52.580253427 +0000 UTC m=+373.602791052" watchObservedRunningTime="2026-01-23 13:38:52.584124242 +0000 UTC m=+373.606661867" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.702561 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.703308 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.750097 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.768993 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dwmns" podStartSLOduration=3.637390881 podStartE2EDuration="6.768973079s" podCreationTimestamp="2026-01-23 13:38:46 +0000 UTC" firstStartedPulling="2026-01-23 13:38:48.471139877 +0000 UTC m=+369.493677502" lastFinishedPulling="2026-01-23 13:38:51.602722075 +0000 UTC m=+372.625259700" observedRunningTime="2026-01-23 13:38:52.614513007 +0000 UTC m=+373.637050652" watchObservedRunningTime="2026-01-23 13:38:52.768973079 +0000 UTC m=+373.791510724" Jan 23 13:38:52 crc kubenswrapper[4771]: I0123 13:38:52.851377 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5d5v"] Jan 23 13:38:52 crc kubenswrapper[4771]: W0123 13:38:52.859231 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be36885_6a91_48ec_b6af_a5abcbaaed97.slice/crio-18e0eafdf7dc3c6cf20a70ad694a7e5c1d1a2b0afb79b0e29b27c0c811ad0842 WatchSource:0}: Error finding container 18e0eafdf7dc3c6cf20a70ad694a7e5c1d1a2b0afb79b0e29b27c0c811ad0842: Status 404 returned error can't find the container with id 18e0eafdf7dc3c6cf20a70ad694a7e5c1d1a2b0afb79b0e29b27c0c811ad0842 Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.338766 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.339170 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.404186 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.551598 4771 generic.go:334] "Generic (PLEG): container finished" podID="5be36885-6a91-48ec-b6af-a5abcbaaed97" containerID="5f9789c502683e5e83c75b9b7d36e84ac1141072d5c1680812ecbecbae2d2339" exitCode=0 Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.551682 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5d5v" event={"ID":"5be36885-6a91-48ec-b6af-a5abcbaaed97","Type":"ContainerDied","Data":"5f9789c502683e5e83c75b9b7d36e84ac1141072d5c1680812ecbecbae2d2339"} Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.551714 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5d5v" event={"ID":"5be36885-6a91-48ec-b6af-a5abcbaaed97","Type":"ContainerStarted","Data":"18e0eafdf7dc3c6cf20a70ad694a7e5c1d1a2b0afb79b0e29b27c0c811ad0842"} Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.556571 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96tmq" event={"ID":"9fab9170-3473-4996-b3c4-55b1f7b4f05a","Type":"ContainerStarted","Data":"858b2b4ac8e3f99a4166224771dc4164e142d7fcb916565909fc76399df52f2b"} Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.610434 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-q6njm" Jan 23 13:38:53 crc kubenswrapper[4771]: I0123 13:38:53.624012 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4x7x" Jan 23 13:38:54 crc kubenswrapper[4771]: I0123 13:38:54.578450 4771 generic.go:334] "Generic (PLEG): container finished" podID="9fab9170-3473-4996-b3c4-55b1f7b4f05a" containerID="858b2b4ac8e3f99a4166224771dc4164e142d7fcb916565909fc76399df52f2b" exitCode=0 Jan 23 13:38:54 crc kubenswrapper[4771]: I0123 13:38:54.578615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96tmq" event={"ID":"9fab9170-3473-4996-b3c4-55b1f7b4f05a","Type":"ContainerDied","Data":"858b2b4ac8e3f99a4166224771dc4164e142d7fcb916565909fc76399df52f2b"} Jan 23 13:38:54 crc kubenswrapper[4771]: I0123 13:38:54.583478 4771 generic.go:334] "Generic (PLEG): container finished" podID="5be36885-6a91-48ec-b6af-a5abcbaaed97" containerID="f7097d5f8d54b540f63273aae9570d517ff35aa9c08408f184ef4e3fc745ecc9" exitCode=0 Jan 23 13:38:54 crc kubenswrapper[4771]: I0123 13:38:54.583882 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5d5v" event={"ID":"5be36885-6a91-48ec-b6af-a5abcbaaed97","Type":"ContainerDied","Data":"f7097d5f8d54b540f63273aae9570d517ff35aa9c08408f184ef4e3fc745ecc9"} Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.591359 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5d5v" event={"ID":"5be36885-6a91-48ec-b6af-a5abcbaaed97","Type":"ContainerStarted","Data":"7cb2f1a35c8a3ae3f80d487c5892a9d2c419a2d4fc622723fb8f2de075f81215"} Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.595809 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-96tmq" event={"ID":"9fab9170-3473-4996-b3c4-55b1f7b4f05a","Type":"ContainerStarted","Data":"36de230a39eb9938b3709c60a687ac585fda3ebb64dc13adb83ed19379895c13"} Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.614187 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j5d5v" podStartSLOduration=3.187428963 podStartE2EDuration="4.614166333s" podCreationTimestamp="2026-01-23 13:38:51 +0000 UTC" firstStartedPulling="2026-01-23 13:38:53.553466739 +0000 UTC m=+374.576004364" lastFinishedPulling="2026-01-23 13:38:54.980204109 +0000 UTC m=+376.002741734" observedRunningTime="2026-01-23 13:38:55.61195573 +0000 UTC m=+376.634493365" watchObservedRunningTime="2026-01-23 13:38:55.614166333 +0000 UTC m=+376.636703968" Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.633339 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-96tmq" podStartSLOduration=2.147929424 podStartE2EDuration="4.633317353s" podCreationTimestamp="2026-01-23 13:38:51 +0000 UTC" firstStartedPulling="2026-01-23 13:38:52.530910899 +0000 UTC m=+373.553448524" lastFinishedPulling="2026-01-23 13:38:55.016298828 +0000 UTC m=+376.038836453" observedRunningTime="2026-01-23 13:38:55.630088628 +0000 UTC m=+376.652626263" watchObservedRunningTime="2026-01-23 13:38:55.633317353 +0000 UTC m=+376.655854988" Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.777153 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ch8sp"] Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.778313 4771 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.789049 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ch8sp"] Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.936590 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs56x\" (UniqueName: \"kubernetes.io/projected/26a8628d-5423-49ce-b186-d31351516531-kube-api-access-cs56x\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.936654 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a8628d-5423-49ce-b186-d31351516531-utilities\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:55 crc kubenswrapper[4771]: I0123 13:38:55.936720 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a8628d-5423-49ce-b186-d31351516531-catalog-content\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 13:38:56.037937 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs56x\" (UniqueName: \"kubernetes.io/projected/26a8628d-5423-49ce-b186-d31351516531-kube-api-access-cs56x\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 13:38:56.038000 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a8628d-5423-49ce-b186-d31351516531-utilities\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 13:38:56.038021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a8628d-5423-49ce-b186-d31351516531-catalog-content\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 13:38:56.038522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a8628d-5423-49ce-b186-d31351516531-catalog-content\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 13:38:56.038998 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a8628d-5423-49ce-b186-d31351516531-utilities\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 
13:38:56.074491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs56x\" (UniqueName: \"kubernetes.io/projected/26a8628d-5423-49ce-b186-d31351516531-kube-api-access-cs56x\") pod \"certified-operators-ch8sp\" (UID: \"26a8628d-5423-49ce-b186-d31351516531\") " pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 13:38:56.095141 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:38:56 crc kubenswrapper[4771]: I0123 13:38:56.604722 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ch8sp"] Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.108046 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.108475 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.152293 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.607586 4771 generic.go:334] "Generic (PLEG): container finished" podID="26a8628d-5423-49ce-b186-d31351516531" containerID="da92320109af3d33f9d55cefe5e092e7203b68016c1c36fd86fdbf4237daeb21" exitCode=0 Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.607669 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ch8sp" event={"ID":"26a8628d-5423-49ce-b186-d31351516531","Type":"ContainerDied","Data":"da92320109af3d33f9d55cefe5e092e7203b68016c1c36fd86fdbf4237daeb21"} Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.607712 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ch8sp" event={"ID":"26a8628d-5423-49ce-b186-d31351516531","Type":"ContainerStarted","Data":"50bdc2fb1a48c07d427d82095a829fcd1b8ac1868f870afdac468d1fc5d62d5c"} Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.650587 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dwmns" Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.694795 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.694857 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:57 crc kubenswrapper[4771]: I0123 13:38:57.734345 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:58 crc kubenswrapper[4771]: I0123 13:38:58.614597 4771 generic.go:334] "Generic (PLEG): container finished" podID="26a8628d-5423-49ce-b186-d31351516531" containerID="92b65f92e91e39f6ee1a2058f769c7535ecfce6371c26a68b8d4825db81fb741" exitCode=0 Jan 23 13:38:58 crc kubenswrapper[4771]: I0123 13:38:58.614795 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ch8sp" event={"ID":"26a8628d-5423-49ce-b186-d31351516531","Type":"ContainerDied","Data":"92b65f92e91e39f6ee1a2058f769c7535ecfce6371c26a68b8d4825db81fb741"} Jan 23 13:38:58 crc 
kubenswrapper[4771]: I0123 13:38:58.672587 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2dzb" Jan 23 13:38:59 crc kubenswrapper[4771]: I0123 13:38:59.622400 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ch8sp" event={"ID":"26a8628d-5423-49ce-b186-d31351516531","Type":"ContainerStarted","Data":"be0ba707d0a00193dd9248f79825ab32e40e68898e60bb265019ac5ebafb447e"} Jan 23 13:38:59 crc kubenswrapper[4771]: I0123 13:38:59.646300 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ch8sp" podStartSLOduration=2.900040799 podStartE2EDuration="4.646276748s" podCreationTimestamp="2026-01-23 13:38:55 +0000 UTC" firstStartedPulling="2026-01-23 13:38:57.610119039 +0000 UTC m=+378.632656664" lastFinishedPulling="2026-01-23 13:38:59.356354988 +0000 UTC m=+380.378892613" observedRunningTime="2026-01-23 13:38:59.641548105 +0000 UTC m=+380.664085720" watchObservedRunningTime="2026-01-23 13:38:59.646276748 +0000 UTC m=+380.668814383" Jan 23 13:39:00 crc kubenswrapper[4771]: I0123 13:39:00.312320 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:39:00 crc kubenswrapper[4771]: I0123 13:39:00.312396 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:39:01 crc kubenswrapper[4771]: I0123 13:39:01.808532 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:39:01 crc kubenswrapper[4771]: I0123 13:39:01.808915 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:39:01 crc kubenswrapper[4771]: I0123 13:39:01.852787 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:39:02 crc kubenswrapper[4771]: I0123 13:39:02.395957 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:39:02 crc kubenswrapper[4771]: I0123 13:39:02.396016 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:39:02 crc kubenswrapper[4771]: I0123 13:39:02.447164 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:39:02 crc kubenswrapper[4771]: I0123 13:39:02.684138 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-96tmq" Jan 23 13:39:02 crc kubenswrapper[4771]: I0123 13:39:02.702328 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5d5v" Jan 23 13:39:06 crc kubenswrapper[4771]: I0123 13:39:06.096354 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:39:06 crc kubenswrapper[4771]: I0123 13:39:06.096562 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:39:06 crc kubenswrapper[4771]: I0123 13:39:06.145825 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:39:06 crc kubenswrapper[4771]: I0123 13:39:06.706376 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ch8sp" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.326919 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm"] Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.328039 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" podUID="7f46c29e-8911-4801-8ec7-ed351f4eedec" containerName="route-controller-manager" containerID="cri-o://b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4" gracePeriod=30 Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.695898 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.721028 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f46c29e-8911-4801-8ec7-ed351f4eedec-serving-cert\") pod \"7f46c29e-8911-4801-8ec7-ed351f4eedec\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.721077 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-config\") pod \"7f46c29e-8911-4801-8ec7-ed351f4eedec\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.721133 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-client-ca\") pod \"7f46c29e-8911-4801-8ec7-ed351f4eedec\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.721166 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgzrt\" (UniqueName: \"kubernetes.io/projected/7f46c29e-8911-4801-8ec7-ed351f4eedec-kube-api-access-wgzrt\") pod \"7f46c29e-8911-4801-8ec7-ed351f4eedec\" (UID: \"7f46c29e-8911-4801-8ec7-ed351f4eedec\") " Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.722038 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f46c29e-8911-4801-8ec7-ed351f4eedec" (UID: "7f46c29e-8911-4801-8ec7-ed351f4eedec"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.722207 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-config" (OuterVolumeSpecName: "config") pod "7f46c29e-8911-4801-8ec7-ed351f4eedec" (UID: "7f46c29e-8911-4801-8ec7-ed351f4eedec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.759315 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f46c29e-8911-4801-8ec7-ed351f4eedec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f46c29e-8911-4801-8ec7-ed351f4eedec" (UID: "7f46c29e-8911-4801-8ec7-ed351f4eedec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.759065 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f46c29e-8911-4801-8ec7-ed351f4eedec-kube-api-access-wgzrt" (OuterVolumeSpecName: "kube-api-access-wgzrt") pod "7f46c29e-8911-4801-8ec7-ed351f4eedec" (UID: "7f46c29e-8911-4801-8ec7-ed351f4eedec"). InnerVolumeSpecName "kube-api-access-wgzrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.804565 4771 generic.go:334] "Generic (PLEG): container finished" podID="7f46c29e-8911-4801-8ec7-ed351f4eedec" containerID="b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4" exitCode=0 Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.804618 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" event={"ID":"7f46c29e-8911-4801-8ec7-ed351f4eedec","Type":"ContainerDied","Data":"b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4"} Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.804640 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.804666 4771 scope.go:117] "RemoveContainer" containerID="b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.804652 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm" event={"ID":"7f46c29e-8911-4801-8ec7-ed351f4eedec","Type":"ContainerDied","Data":"c835b822d2be9bde220d96732ca9137fd8f722d91e30c976e858bf8bfb620a48"} Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.822054 4771 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f46c29e-8911-4801-8ec7-ed351f4eedec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.822083 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.822095 4771 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f46c29e-8911-4801-8ec7-ed351f4eedec-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.822107 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgzrt\" (UniqueName: \"kubernetes.io/projected/7f46c29e-8911-4801-8ec7-ed351f4eedec-kube-api-access-wgzrt\") on node \"crc\" DevicePath \"\"" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.826105 4771 scope.go:117] "RemoveContainer" containerID="b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4" Jan 23 13:39:28 crc kubenswrapper[4771]: E0123 13:39:28.826648 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4\": container with ID starting with b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4 not found: ID does not exist" containerID="b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.826683 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4"} err="failed to get container status \"b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4\": rpc error: code = NotFound desc = could not find container \"b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4\": container with ID starting with b7d31a352ab700ae122994ba0b8cbf607e00c7da030f0e2086c278272287a4b4 not found: ID does not exist" Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.829800 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm"] Jan 23 13:39:28 crc kubenswrapper[4771]: I0123 13:39:28.834994 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b59c8d778-9vzsm"] Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.234695 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f46c29e-8911-4801-8ec7-ed351f4eedec" 
path="/var/lib/kubelet/pods/7f46c29e-8911-4801-8ec7-ed351f4eedec/volumes" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.900804 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7"] Jan 23 13:39:29 crc kubenswrapper[4771]: E0123 13:39:29.901207 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f46c29e-8911-4801-8ec7-ed351f4eedec" containerName="route-controller-manager" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.901225 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f46c29e-8911-4801-8ec7-ed351f4eedec" containerName="route-controller-manager" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.901395 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f46c29e-8911-4801-8ec7-ed351f4eedec" containerName="route-controller-manager" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.902084 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.905812 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.943191 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.943385 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.943641 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.943747 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.943857 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.944099 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f81af401-ee42-4368-aeed-b119960bd887-config\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.944156 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f81af401-ee42-4368-aeed-b119960bd887-serving-cert\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.944200 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphtr\" (UniqueName: \"kubernetes.io/projected/f81af401-ee42-4368-aeed-b119960bd887-kube-api-access-qphtr\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " 
pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.944287 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f81af401-ee42-4368-aeed-b119960bd887-client-ca\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:29 crc kubenswrapper[4771]: I0123 13:39:29.950000 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7"] Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.045446 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f81af401-ee42-4368-aeed-b119960bd887-config\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.045512 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f81af401-ee42-4368-aeed-b119960bd887-serving-cert\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.045574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qphtr\" (UniqueName: \"kubernetes.io/projected/f81af401-ee42-4368-aeed-b119960bd887-kube-api-access-qphtr\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.045630 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f81af401-ee42-4368-aeed-b119960bd887-client-ca\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.046628 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f81af401-ee42-4368-aeed-b119960bd887-client-ca\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.048066 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f81af401-ee42-4368-aeed-b119960bd887-config\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.050800 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f81af401-ee42-4368-aeed-b119960bd887-serving-cert\") 
pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.070957 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qphtr\" (UniqueName: \"kubernetes.io/projected/f81af401-ee42-4368-aeed-b119960bd887-kube-api-access-qphtr\") pod \"route-controller-manager-6987b54645-s2gc7\" (UID: \"f81af401-ee42-4368-aeed-b119960bd887\") " pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.259111 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.311879 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.311945 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.311996 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.312682 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b44f9611bbafce674e71ca1e8d34068dfa0d63956d90aaa82888afd111bd7d1"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.312748 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://1b44f9611bbafce674e71ca1e8d34068dfa0d63956d90aaa82888afd111bd7d1" gracePeriod=600 Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.669580 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7"] Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.824018 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" event={"ID":"f81af401-ee42-4368-aeed-b119960bd887","Type":"ContainerStarted","Data":"9f4e005f44573a8455670a694866b7c2b89c54152f109dbdd01ac94c48cbb876"} Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.824070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" event={"ID":"f81af401-ee42-4368-aeed-b119960bd887","Type":"ContainerStarted","Data":"3679dc15c5ea31faeec92f0a4840bc9fb5a8cec9efd09e009653b33d0e448c11"} Jan 23 
13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.824828 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.828996 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="1b44f9611bbafce674e71ca1e8d34068dfa0d63956d90aaa82888afd111bd7d1" exitCode=0 Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.829036 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"1b44f9611bbafce674e71ca1e8d34068dfa0d63956d90aaa82888afd111bd7d1"} Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.829059 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"6b35dff9a089e9be4ffa5c4d273ac0e2dc94e3b914bdc6c34a5a9f3294cdefc7"} Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.829076 4771 scope.go:117] "RemoveContainer" containerID="fb777362fa7298175ae4d0bfe9cce32c35468d758ee4ce37aaa60a12c1222235" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.831352 4771 patch_prober.go:28] interesting pod/route-controller-manager-6987b54645-s2gc7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.76:8443/healthz\": dial tcp 10.217.0.76:8443: connect: connection refused" start-of-body= Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.831395 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" podUID="f81af401-ee42-4368-aeed-b119960bd887" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.76:8443/healthz\": dial tcp 10.217.0.76:8443: connect: connection refused" Jan 23 13:39:30 crc kubenswrapper[4771]: I0123 13:39:30.843813 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" podStartSLOduration=2.843787118 podStartE2EDuration="2.843787118s" podCreationTimestamp="2026-01-23 13:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:39:30.841805153 +0000 UTC m=+411.864342788" watchObservedRunningTime="2026-01-23 13:39:30.843787118 +0000 UTC m=+411.866324753" Jan 23 13:39:31 crc kubenswrapper[4771]: I0123 13:39:31.845104 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6987b54645-s2gc7" Jan 23 13:41:30 crc kubenswrapper[4771]: I0123 13:41:30.311652 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:41:30 crc kubenswrapper[4771]: I0123 13:41:30.314190 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" 
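
[Annotation] This stretch shows the full liveness-restart arc for machine-config-daemon-z299d: repeated probe failures (connection refused on 127.0.0.1:8798), then "failed liveness probe, will be restarted", a kill of 1b44f961... with the pod's grace period (600 s here), a ContainerDied/ContainerStarted pair for the replacement (6b35dff9...), and a RemoveContainer of the older dead instance fb777362... as garbage collection. A compact sketch of the threshold-then-restart decision (illustrative; the threshold of 3 is the upstream default failureThreshold, assumed rather than shown in this log):

```go
package main

import "fmt"

type worker struct {
	failures  int
	threshold int
}

// onProbeResult returns true when consecutive failures reach the
// threshold and the container should be killed and restarted.
func (w *worker) onProbeResult(healthy bool) bool {
	if healthy {
		w.failures = 0
		return false
	}
	w.failures++
	return w.failures >= w.threshold
}

func main() {
	w := &worker{threshold: 3}
	for i := 1; i <= 3; i++ {
		if w.onProbeResult(false) {
			fmt.Println("failed liveness probe, will be restarted (gracePeriod=600s)")
		} else {
			fmt.Printf("probe failure %d/%d, not restarting yet\n", w.failures, w.threshold)
		}
	}
}
```
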
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:41:39 crc kubenswrapper[4771]: I0123 13:41:39.453478 4771 scope.go:117] "RemoveContainer" containerID="abc9dd69238f5cc36c402cf0edee1e97067cb75f989d23fb723a0c2cccd20198" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.225931 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-njl7x"] Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.227214 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-njl7x" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.234642 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.234798 4771 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-9zks2" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.234966 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.243579 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8"] Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.244579 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.258767 4771 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xtkjm" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.259041 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-g98gg"] Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.260570 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8"] Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.260637 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.266153 4771 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bz6xs" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.267852 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-njl7x"] Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.276087 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcldv\" (UniqueName: \"kubernetes.io/projected/28bae9dd-6a27-42bc-b05c-9e14f92a5afe-kube-api-access-gcldv\") pod \"cert-manager-webhook-687f57d79b-g98gg\" (UID: \"28bae9dd-6a27-42bc-b05c-9e14f92a5afe\") " pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.276147 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7k5b\" (UniqueName: \"kubernetes.io/projected/caf0d360-cd4f-4a23-8104-162c00e9b1b3-kube-api-access-m7k5b\") pod \"cert-manager-858654f9db-njl7x\" (UID: \"caf0d360-cd4f-4a23-8104-162c00e9b1b3\") " pod="cert-manager/cert-manager-858654f9db-njl7x" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.276173 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2q6w\" (UniqueName: \"kubernetes.io/projected/a4f83ec4-9e43-4e12-a479-5df0667e28f9-kube-api-access-n2q6w\") pod \"cert-manager-cainjector-cf98fcc89-xfpd8\" (UID: \"a4f83ec4-9e43-4e12-a479-5df0667e28f9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.283455 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-g98gg"] Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.377750 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcldv\" (UniqueName: \"kubernetes.io/projected/28bae9dd-6a27-42bc-b05c-9e14f92a5afe-kube-api-access-gcldv\") pod \"cert-manager-webhook-687f57d79b-g98gg\" (UID: \"28bae9dd-6a27-42bc-b05c-9e14f92a5afe\") " pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.378029 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7k5b\" (UniqueName: \"kubernetes.io/projected/caf0d360-cd4f-4a23-8104-162c00e9b1b3-kube-api-access-m7k5b\") pod \"cert-manager-858654f9db-njl7x\" (UID: \"caf0d360-cd4f-4a23-8104-162c00e9b1b3\") " pod="cert-manager/cert-manager-858654f9db-njl7x" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.378130 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2q6w\" (UniqueName: \"kubernetes.io/projected/a4f83ec4-9e43-4e12-a479-5df0667e28f9-kube-api-access-n2q6w\") pod \"cert-manager-cainjector-cf98fcc89-xfpd8\" (UID: \"a4f83ec4-9e43-4e12-a479-5df0667e28f9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.398601 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7k5b\" (UniqueName: \"kubernetes.io/projected/caf0d360-cd4f-4a23-8104-162c00e9b1b3-kube-api-access-m7k5b\") pod \"cert-manager-858654f9db-njl7x\" (UID: \"caf0d360-cd4f-4a23-8104-162c00e9b1b3\") " 
pod="cert-manager/cert-manager-858654f9db-njl7x" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.398620 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcldv\" (UniqueName: \"kubernetes.io/projected/28bae9dd-6a27-42bc-b05c-9e14f92a5afe-kube-api-access-gcldv\") pod \"cert-manager-webhook-687f57d79b-g98gg\" (UID: \"28bae9dd-6a27-42bc-b05c-9e14f92a5afe\") " pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.404273 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2q6w\" (UniqueName: \"kubernetes.io/projected/a4f83ec4-9e43-4e12-a479-5df0667e28f9-kube-api-access-n2q6w\") pod \"cert-manager-cainjector-cf98fcc89-xfpd8\" (UID: \"a4f83ec4-9e43-4e12-a479-5df0667e28f9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.547548 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-njl7x" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.581396 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.593087 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.791847 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-njl7x"] Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.806899 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 13:41:48 crc kubenswrapper[4771]: I0123 13:41:48.833593 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-g98gg"] Jan 23 13:41:48 crc kubenswrapper[4771]: W0123 13:41:48.840587 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28bae9dd_6a27_42bc_b05c_9e14f92a5afe.slice/crio-35d6e3d04d843f17253e11cbea01976f524adfaaba8d09bf1b5c70fb5c5a81fe WatchSource:0}: Error finding container 35d6e3d04d843f17253e11cbea01976f524adfaaba8d09bf1b5c70fb5c5a81fe: Status 404 returned error can't find the container with id 35d6e3d04d843f17253e11cbea01976f524adfaaba8d09bf1b5c70fb5c5a81fe Jan 23 13:41:49 crc kubenswrapper[4771]: I0123 13:41:49.093065 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8"] Jan 23 13:41:49 crc kubenswrapper[4771]: W0123 13:41:49.098192 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4f83ec4_9e43_4e12_a479_5df0667e28f9.slice/crio-04b547403d0bf4035a2990dc8a80db0cfee7b36f2217ed42488df7d76115c0b5 WatchSource:0}: Error finding container 04b547403d0bf4035a2990dc8a80db0cfee7b36f2217ed42488df7d76115c0b5: Status 404 returned error can't find the container with id 04b547403d0bf4035a2990dc8a80db0cfee7b36f2217ed42488df7d76115c0b5 Jan 23 13:41:49 crc kubenswrapper[4771]: I0123 13:41:49.741535 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" 
event={"ID":"28bae9dd-6a27-42bc-b05c-9e14f92a5afe","Type":"ContainerStarted","Data":"35d6e3d04d843f17253e11cbea01976f524adfaaba8d09bf1b5c70fb5c5a81fe"} Jan 23 13:41:49 crc kubenswrapper[4771]: I0123 13:41:49.742965 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" event={"ID":"a4f83ec4-9e43-4e12-a479-5df0667e28f9","Type":"ContainerStarted","Data":"04b547403d0bf4035a2990dc8a80db0cfee7b36f2217ed42488df7d76115c0b5"} Jan 23 13:41:49 crc kubenswrapper[4771]: I0123 13:41:49.743831 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-njl7x" event={"ID":"caf0d360-cd4f-4a23-8104-162c00e9b1b3","Type":"ContainerStarted","Data":"d5100ead980577c74c65393fd9c170b558c8d9b35e868939287505f8ae17f644"} Jan 23 13:41:52 crc kubenswrapper[4771]: I0123 13:41:52.762200 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-njl7x" event={"ID":"caf0d360-cd4f-4a23-8104-162c00e9b1b3","Type":"ContainerStarted","Data":"68963ec741f85a2b61cf621b01151e77cd82705f50cb00bd9bd5fddd5332ddb4"} Jan 23 13:41:52 crc kubenswrapper[4771]: I0123 13:41:52.765718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" event={"ID":"28bae9dd-6a27-42bc-b05c-9e14f92a5afe","Type":"ContainerStarted","Data":"20b9f4052d9d9182a66ed9421fe453d4b5d306febb8965896dde7eb82805f69c"} Jan 23 13:41:52 crc kubenswrapper[4771]: I0123 13:41:52.765951 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" Jan 23 13:41:52 crc kubenswrapper[4771]: I0123 13:41:52.786728 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-njl7x" podStartSLOduration=1.252077699 podStartE2EDuration="4.786695367s" podCreationTimestamp="2026-01-23 13:41:48 +0000 UTC" firstStartedPulling="2026-01-23 13:41:48.806635598 +0000 UTC m=+549.829173223" lastFinishedPulling="2026-01-23 13:41:52.341253266 +0000 UTC m=+553.363790891" observedRunningTime="2026-01-23 13:41:52.77899863 +0000 UTC m=+553.801536265" watchObservedRunningTime="2026-01-23 13:41:52.786695367 +0000 UTC m=+553.809233012" Jan 23 13:41:52 crc kubenswrapper[4771]: I0123 13:41:52.802687 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" podStartSLOduration=1.306036053 podStartE2EDuration="4.80266499s" podCreationTimestamp="2026-01-23 13:41:48 +0000 UTC" firstStartedPulling="2026-01-23 13:41:48.843165642 +0000 UTC m=+549.865703287" lastFinishedPulling="2026-01-23 13:41:52.339794599 +0000 UTC m=+553.362332224" observedRunningTime="2026-01-23 13:41:52.799304471 +0000 UTC m=+553.821842106" watchObservedRunningTime="2026-01-23 13:41:52.80266499 +0000 UTC m=+553.825202615" Jan 23 13:41:53 crc kubenswrapper[4771]: I0123 13:41:53.776742 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" event={"ID":"a4f83ec4-9e43-4e12-a479-5df0667e28f9","Type":"ContainerStarted","Data":"774968545fc9c6bbf28d9a7177998f280b03e6a0f94f239133608e195f390fca"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.010493 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-xfpd8" podStartSLOduration=6.079297434 podStartE2EDuration="10.010468033s" podCreationTimestamp="2026-01-23 13:41:48 +0000 UTC" 
firstStartedPulling="2026-01-23 13:41:49.101619144 +0000 UTC m=+550.124156769" lastFinishedPulling="2026-01-23 13:41:53.032789743 +0000 UTC m=+554.055327368" observedRunningTime="2026-01-23 13:41:53.796773938 +0000 UTC m=+554.819311553" watchObservedRunningTime="2026-01-23 13:41:58.010468033 +0000 UTC m=+559.033005678" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.014936 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qbvcq"] Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.015324 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-controller" containerID="cri-o://85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.015367 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="sbdb" containerID="cri-o://19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.015448 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.015466 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="nbdb" containerID="cri-o://bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.015492 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="northd" containerID="cri-o://9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.015522 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-node" containerID="cri-o://3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.015506 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-acl-logging" containerID="cri-o://1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.063623 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" containerID="cri-o://c2154643b4cc41a9aa58b5a1db17f5fca6204c67bf8fb95a4bd7c8a2dc0276c0" gracePeriod=30 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.596908 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-g98gg" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.973451 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/2.log" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.973906 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/1.log" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.973946 4771 generic.go:334] "Generic (PLEG): container finished" podID="803fce37-afd3-4ce0-9135-ccb3831e206c" containerID="28ac912c2e3ef2dca670bbcb9e317bc6920fefb80666b05c8f726b30575a2dc5" exitCode=2 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.974005 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerDied","Data":"28ac912c2e3ef2dca670bbcb9e317bc6920fefb80666b05c8f726b30575a2dc5"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.974054 4771 scope.go:117] "RemoveContainer" containerID="a60a136dc4bbd01620d825cbf1a9aeb738b6203a638f9f07e266873850861615" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.975037 4771 scope.go:117] "RemoveContainer" containerID="28ac912c2e3ef2dca670bbcb9e317bc6920fefb80666b05c8f726b30575a2dc5" Jan 23 13:41:58 crc kubenswrapper[4771]: E0123 13:41:58.975389 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5dzz5_openshift-multus(803fce37-afd3-4ce0-9135-ccb3831e206c)\"" pod="openshift-multus/multus-5dzz5" podUID="803fce37-afd3-4ce0-9135-ccb3831e206c" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.977521 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovnkube-controller/3.log" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.979952 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovn-acl-logging/0.log" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.980737 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovn-controller/0.log" Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981084 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="c2154643b4cc41a9aa58b5a1db17f5fca6204c67bf8fb95a4bd7c8a2dc0276c0" exitCode=0 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981103 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3" exitCode=0 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981112 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3" exitCode=0 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981120 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773" exitCode=0 Jan 23 
13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981128 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6" exitCode=0 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981134 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade" exitCode=0 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981141 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c" exitCode=143 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981150 4771 generic.go:334] "Generic (PLEG): container finished" podID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerID="85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88" exitCode=143 Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981153 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"c2154643b4cc41a9aa58b5a1db17f5fca6204c67bf8fb95a4bd7c8a2dc0276c0"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981194 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981212 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981225 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981240 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981252 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981267 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c"} Jan 23 13:41:58 crc kubenswrapper[4771]: I0123 13:41:58.981278 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" 
event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88"} Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.116348 4771 scope.go:117] "RemoveContainer" containerID="93f6ce1ad06b14461538899f88f3cfb6fa6d501a57407727b065af728f19fe91" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.448815 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovn-acl-logging/0.log" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.449843 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovn-controller/0.log" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.450448 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534060 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534166 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-systemd\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534201 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-etc-openvswitch\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534233 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-kubelet\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534260 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-config\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534253 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovn-node-metrics-cert\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534322 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534362 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-script-lib\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534403 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-ovn\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534449 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-env-overrides\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534475 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4bww\" (UniqueName: \"kubernetes.io/projected/4ba84e18-6300-433f-98d7-f1a2ddd0073c-kube-api-access-g4bww\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534499 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-log-socket\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534531 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-node-log\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534562 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-openvswitch\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-bin\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534631 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-netns\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534663 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-var-lib-openvswitch\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534690 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-systemd-units\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534721 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-slash\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534745 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-netd\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.534768 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-ovn-kubernetes\") pod \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\" (UID: \"4ba84e18-6300-433f-98d7-f1a2ddd0073c\") " Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535017 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535079 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535140 4771 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535158 4771 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535171 4771 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535185 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535268 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535364 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.535741 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.536801 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.536868 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-log-socket" (OuterVolumeSpecName: "log-socket") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.536943 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-node-log" (OuterVolumeSpecName: "node-log") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.536973 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.537026 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.537007 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-slash" (OuterVolumeSpecName: "host-slash") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.537052 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.537081 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.537106 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.537125 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.539843 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4trqh"] Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540124 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540145 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540156 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540163 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540173 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="nbdb" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540180 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="nbdb" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540191 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540197 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540206 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="sbdb" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540212 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="sbdb" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540222 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540227 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540237 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="northd" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540243 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="northd" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540252 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-acl-logging" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540259 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-acl-logging" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540267 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" 
containerName="kubecfg-setup" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540272 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kubecfg-setup" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540280 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-node" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540287 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-node" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540298 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540305 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540398 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-node" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540423 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540430 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="nbdb" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540435 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="northd" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540444 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="sbdb" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540451 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540458 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540464 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540472 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540481 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-acl-logging" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540489 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovn-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540585 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540591 4771 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: E0123 13:41:59.540601 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540607 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.540723 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" containerName="ovnkube-controller" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.542421 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.543464 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.543523 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ba84e18-6300-433f-98d7-f1a2ddd0073c-kube-api-access-g4bww" (OuterVolumeSpecName: "kube-api-access-g4bww") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "kube-api-access-g4bww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.561949 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4ba84e18-6300-433f-98d7-f1a2ddd0073c" (UID: "4ba84e18-6300-433f-98d7-f1a2ddd0073c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.636732 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.636804 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-systemd-units\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.636832 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-var-lib-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.636870 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-run-netns\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.636935 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-ovnkube-config\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.636986 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-cni-netd\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637075 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-slash\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637158 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-env-overrides\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637190 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-etc-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637215 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-systemd\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637282 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8wx\" (UniqueName: \"kubernetes.io/projected/c496e376-ffed-4403-b461-f02df206a736-kube-api-access-6s8wx\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637368 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-run-ovn-kubernetes\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637435 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-ovn\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637496 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496e376-ffed-4403-b461-f02df206a736-ovn-node-metrics-cert\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637526 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-ovnkube-script-lib\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637569 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-cni-bin\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: 
I0123 13:41:59.637601 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-log-socket\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637636 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-node-log\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637687 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-kubelet\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637787 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637815 4771 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637831 4771 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637844 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4bww\" (UniqueName: \"kubernetes.io/projected/4ba84e18-6300-433f-98d7-f1a2ddd0073c-kube-api-access-g4bww\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637856 4771 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637869 4771 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637881 4771 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637894 4771 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637906 4771 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 
13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637922 4771 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637932 4771 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637943 4771 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637958 4771 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637974 4771 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.637987 4771 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ba84e18-6300-433f-98d7-f1a2ddd0073c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.638001 4771 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ba84e18-6300-433f-98d7-f1a2ddd0073c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739349 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-systemd\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739489 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-systemd\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739483 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8wx\" (UniqueName: \"kubernetes.io/projected/c496e376-ffed-4403-b461-f02df206a736-kube-api-access-6s8wx\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-run-ovn-kubernetes\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739574 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-ovn\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739594 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739612 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496e376-ffed-4403-b461-f02df206a736-ovn-node-metrics-cert\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739629 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-ovnkube-script-lib\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739653 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-cni-bin\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739676 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-log-socket\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739693 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-kubelet\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739706 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-node-log\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739724 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-systemd-units\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739724 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-ovn\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739783 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-kubelet\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739738 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-var-lib-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739733 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739788 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-log-socket\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739833 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739854 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-node-log\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739883 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-systemd-units\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739762 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-var-lib-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-run-netns\") pod 
\"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739923 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-run-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739940 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-ovnkube-config\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.739982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-cni-bin\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-cni-netd\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740014 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-run-netns\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740025 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-slash\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740069 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-env-overrides\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740087 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-etc-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740176 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-etc-openvswitch\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 
crc kubenswrapper[4771]: I0123 13:41:59.740202 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-cni-netd\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740223 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-slash\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740629 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c496e376-ffed-4403-b461-f02df206a736-host-run-ovn-kubernetes\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740874 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-env-overrides\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.740938 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-ovnkube-script-lib\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.741077 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c496e376-ffed-4403-b461-f02df206a736-ovnkube-config\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.744827 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c496e376-ffed-4403-b461-f02df206a736-ovn-node-metrics-cert\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.756485 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s8wx\" (UniqueName: \"kubernetes.io/projected/c496e376-ffed-4403-b461-f02df206a736-kube-api-access-6s8wx\") pod \"ovnkube-node-4trqh\" (UID: \"c496e376-ffed-4403-b461-f02df206a736\") " pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.897187 4771 util.go:30] "No sandbox for pod can be found. 
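[Annotation] The mount storm above follows the kubelet's usual three-step pattern per volume: "VerifyControllerAttachedVolume started" (reconciler_common.go:245), "MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637). The hostPath volumes for ovnkube-node-4trqh succeed almost immediately (run-systemd starts at 13:41:59.739349 and succeeds at .739489), while the two volumes that need API data land last: the ovn-node-metrics-cert secret at 13:41:59.744827 and the kube-api-access-6s8wx projected token at 13:41:59.756485. A minimal sketch for pairing those lines and measuring per-volume mount latency (Python; the regexes are fitted to the journal layout of this capture, including its escaped quotes, and are not derived from kubelet source):

    import re
    import sys
    from datetime import datetime

    # One journal line in this capture can hold several klog entries; split on
    # the "I0123 13:41:59.739489" style prefix so each entry is matched alone.
    SPLIT = re.compile(r'(?=I\d{4} \d{2}:\d{2}:\d{2}\.\d{6})')
    EVENT = re.compile(
        r'I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6}).*?'
        r'MountVolume(?:\.SetUp)? (started|succeeded) for volume .*?'
        r'UniqueName: \\?"([^\\"]+)'
    )

    def ts(text):
        return datetime.strptime(text, "%H:%M:%S.%f")

    started = {}  # keyed by UniqueName, which is unique per pod and volume
    for line in sys.stdin:
        for entry in SPLIT.split(line):
            m = EVENT.match(entry)
            if not m:
                continue
            when, kind, unique = ts(m.group(1)), m.group(2), m.group(3)
            if kind == "started":
                started.setdefault(unique, when)
            elif unique in started:
                delta = when - started.pop(unique)
                print(f"{delta.total_seconds() * 1000:7.1f} ms  {unique}")

Fed the kubelet journal (for example, journalctl piped into the script), this prints one latency line per volume that both started and succeeded inside the capture.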
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:41:59 crc kubenswrapper[4771]: W0123 13:41:59.920813 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc496e376_ffed_4403_b461_f02df206a736.slice/crio-f2a97a5ffb43b721fb3a53e7c5ec8d38fdccee70bae18eb72ad79be735bab7f5 WatchSource:0}: Error finding container f2a97a5ffb43b721fb3a53e7c5ec8d38fdccee70bae18eb72ad79be735bab7f5: Status 404 returned error can't find the container with id f2a97a5ffb43b721fb3a53e7c5ec8d38fdccee70bae18eb72ad79be735bab7f5 Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.991455 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"f2a97a5ffb43b721fb3a53e7c5ec8d38fdccee70bae18eb72ad79be735bab7f5"} Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.997788 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovn-acl-logging/0.log" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.998440 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qbvcq_4ba84e18-6300-433f-98d7-f1a2ddd0073c/ovn-controller/0.log" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.998844 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" event={"ID":"4ba84e18-6300-433f-98d7-f1a2ddd0073c","Type":"ContainerDied","Data":"26753b794314c9314f55ee549252c6309c081a3afd46d5e7d434727d53deb321"} Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.998892 4771 scope.go:117] "RemoveContainer" containerID="c2154643b4cc41a9aa58b5a1db17f5fca6204c67bf8fb95a4bd7c8a2dc0276c0" Jan 23 13:41:59 crc kubenswrapper[4771]: I0123 13:41:59.998997 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbvcq" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.001388 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/2.log" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.033786 4771 scope.go:117] "RemoveContainer" containerID="19f27619ec1ea386ce4038b2f71bd3e25b444f6d107dbd96ea62b6966d98eca3" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.059309 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qbvcq"] Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.061473 4771 scope.go:117] "RemoveContainer" containerID="bab449b824f8bdbe5c7a46dbc86dd53e3c93d5c8edfef2930ea1f1ca119babc3" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.069270 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qbvcq"] Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.076792 4771 scope.go:117] "RemoveContainer" containerID="9578c3cf10ab260dd45fbce3a6de7453e326b6a7c7b36a43c6ed4d4621529773" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.091513 4771 scope.go:117] "RemoveContainer" containerID="0ffdb746adaeb39eb32ad909efc4164a4a3f2874c46ca5a87fdaefae34a350e6" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.104825 4771 scope.go:117] "RemoveContainer" containerID="3b8a6204a77a4c7243d4854861d00d85b731687a0699172dee42ec488809dade" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.118289 4771 scope.go:117] "RemoveContainer" containerID="1453743885c809c90984f778b4e074aa0468cdaeee4de2ad9b5b97ce2ab36c0c" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.137084 4771 scope.go:117] "RemoveContainer" containerID="85487bca55a18aff7b2f5cd69b328f8c000e7f8e8dc7e00c0b39369cd9ef8e88" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.162250 4771 scope.go:117] "RemoveContainer" containerID="8822fd272c2d5723596a273c3a2a760c0eb405b63ca5cc8b01875f4d40f3c052" Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.312088 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:42:00 crc kubenswrapper[4771]: I0123 13:42:00.312380 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:42:01 crc kubenswrapper[4771]: I0123 13:42:01.011314 4771 generic.go:334] "Generic (PLEG): container finished" podID="c496e376-ffed-4403-b461-f02df206a736" containerID="1718ebfb2d27443b726db01e2ba209f8200dc09e1c0b06832d13c02c0dd47a80" exitCode=0 Jan 23 13:42:01 crc kubenswrapper[4771]: I0123 13:42:01.011403 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerDied","Data":"1718ebfb2d27443b726db01e2ba209f8200dc09e1c0b06832d13c02c0dd47a80"} Jan 23 13:42:01 crc kubenswrapper[4771]: I0123 13:42:01.238660 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ba84e18-6300-433f-98d7-f1a2ddd0073c" 
path="/var/lib/kubelet/pods/4ba84e18-6300-433f-98d7-f1a2ddd0073c/volumes" Jan 23 13:42:02 crc kubenswrapper[4771]: I0123 13:42:02.022020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"8d9079fa429216b44c58a06be7bf16c2aad99d60a32bebfd551e8f36fa5cf834"} Jan 23 13:42:02 crc kubenswrapper[4771]: I0123 13:42:02.022504 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"2d4f4aa2c1cd84183ef34743c7aa7dfc8db8e865a61668abea540f5a946b5c23"} Jan 23 13:42:02 crc kubenswrapper[4771]: I0123 13:42:02.022515 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"471f14b3f9cce0e4b1426444fad88e793cb75edbfc015c5f070e7e4f93f095bb"} Jan 23 13:42:02 crc kubenswrapper[4771]: I0123 13:42:02.022524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"05a3b89eb8c86b8fc28b9c521e4f115d1560ba19c9e13a0375a7d138205b414c"} Jan 23 13:42:02 crc kubenswrapper[4771]: I0123 13:42:02.022533 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"e8addbe6d16db33146d958d1db016c2d12a0c07e8cda1988258d7d144268df48"} Jan 23 13:42:02 crc kubenswrapper[4771]: I0123 13:42:02.022541 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"af105bd5afe42f705fc6cd9c7b4b8f9a574e20d554f95068705e4d2f5acb94fb"} Jan 23 13:42:05 crc kubenswrapper[4771]: I0123 13:42:05.044350 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"5eebdbca4237c5235e2a0839226411a784c1bc94da338a2384f000680e3fd2e8"} Jan 23 13:42:07 crc kubenswrapper[4771]: I0123 13:42:07.062515 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" event={"ID":"c496e376-ffed-4403-b461-f02df206a736","Type":"ContainerStarted","Data":"7a9377b56c4662d881c395e5edd3bd8ad24affe8c5b7575dc11f7974c1188bd1"} Jan 23 13:42:07 crc kubenswrapper[4771]: I0123 13:42:07.063162 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:42:07 crc kubenswrapper[4771]: I0123 13:42:07.063177 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:42:07 crc kubenswrapper[4771]: I0123 13:42:07.063187 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:42:07 crc kubenswrapper[4771]: I0123 13:42:07.099103 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" podStartSLOduration=8.099081228 podStartE2EDuration="8.099081228s" podCreationTimestamp="2026-01-23 13:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:42:07.096289259 +0000 UTC m=+568.118826914" watchObservedRunningTime="2026-01-23 13:42:07.099081228 +0000 UTC m=+568.121618863" Jan 23 13:42:07 crc kubenswrapper[4771]: I0123 13:42:07.104129 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:42:07 crc kubenswrapper[4771]: I0123 13:42:07.104359 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:42:14 crc kubenswrapper[4771]: I0123 13:42:14.228303 4771 scope.go:117] "RemoveContainer" containerID="28ac912c2e3ef2dca670bbcb9e317bc6920fefb80666b05c8f726b30575a2dc5" Jan 23 13:42:14 crc kubenswrapper[4771]: E0123 13:42:14.229152 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5dzz5_openshift-multus(803fce37-afd3-4ce0-9135-ccb3831e206c)\"" pod="openshift-multus/multus-5dzz5" podUID="803fce37-afd3-4ce0-9135-ccb3831e206c" Jan 23 13:42:27 crc kubenswrapper[4771]: I0123 13:42:27.228065 4771 scope.go:117] "RemoveContainer" containerID="28ac912c2e3ef2dca670bbcb9e317bc6920fefb80666b05c8f726b30575a2dc5" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.179227 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s"] Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.180835 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.183388 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.195828 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s"] Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.200304 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5dzz5_803fce37-afd3-4ce0-9135-ccb3831e206c/kube-multus/2.log" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.200357 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5dzz5" event={"ID":"803fce37-afd3-4ce0-9135-ccb3831e206c","Type":"ContainerStarted","Data":"bedb5954a92bebf532d03f3f019d1e984a6dbf4d8b7054361829080d57803027"} Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.245610 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.245730 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lqj\" (UniqueName: \"kubernetes.io/projected/f5bf361f-9ef1-4f6f-bc47-c428011faeac-kube-api-access-v5lqj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: 
\"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.245784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.346730 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5lqj\" (UniqueName: \"kubernetes.io/projected/f5bf361f-9ef1-4f6f-bc47-c428011faeac-kube-api-access-v5lqj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.346851 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.346913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.347380 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.347445 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.364527 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5lqj\" (UniqueName: \"kubernetes.io/projected/f5bf361f-9ef1-4f6f-bc47-c428011faeac-kube-api-access-v5lqj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: I0123 13:42:28.497996 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: E0123 13:42:28.523863 4771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(636eed54dfb0f1c57e43eb9d3e9c553a17439f8f820bb4eb39bf3c8ddc15a95b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 13:42:28 crc kubenswrapper[4771]: E0123 13:42:28.523952 4771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(636eed54dfb0f1c57e43eb9d3e9c553a17439f8f820bb4eb39bf3c8ddc15a95b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: E0123 13:42:28.523978 4771 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(636eed54dfb0f1c57e43eb9d3e9c553a17439f8f820bb4eb39bf3c8ddc15a95b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:28 crc kubenswrapper[4771]: E0123 13:42:28.524050 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace(f5bf361f-9ef1-4f6f-bc47-c428011faeac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace(f5bf361f-9ef1-4f6f-bc47-c428011faeac)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(636eed54dfb0f1c57e43eb9d3e9c553a17439f8f820bb4eb39bf3c8ddc15a95b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" Jan 23 13:42:29 crc kubenswrapper[4771]: I0123 13:42:29.209537 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:29 crc kubenswrapper[4771]: I0123 13:42:29.210266 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:29 crc kubenswrapper[4771]: E0123 13:42:29.242770 4771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(99efd0845cfe5d874e4f917a3e728df7eb1c3a797164933e4115bb31d01817f4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 13:42:29 crc kubenswrapper[4771]: E0123 13:42:29.242867 4771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(99efd0845cfe5d874e4f917a3e728df7eb1c3a797164933e4115bb31d01817f4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:29 crc kubenswrapper[4771]: E0123 13:42:29.242908 4771 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(99efd0845cfe5d874e4f917a3e728df7eb1c3a797164933e4115bb31d01817f4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:29 crc kubenswrapper[4771]: E0123 13:42:29.243011 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace(f5bf361f-9ef1-4f6f-bc47-c428011faeac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace(f5bf361f-9ef1-4f6f-bc47-c428011faeac)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_openshift-marketplace_f5bf361f-9ef1-4f6f-bc47-c428011faeac_0(99efd0845cfe5d874e4f917a3e728df7eb1c3a797164933e4115bb31d01817f4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" Jan 23 13:42:29 crc kubenswrapper[4771]: I0123 13:42:29.919857 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4trqh" Jan 23 13:42:30 crc kubenswrapper[4771]: I0123 13:42:30.312567 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:42:30 crc kubenswrapper[4771]: I0123 13:42:30.312653 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:42:30 crc kubenswrapper[4771]: I0123 13:42:30.312706 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:42:30 crc kubenswrapper[4771]: I0123 13:42:30.313361 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b35dff9a089e9be4ffa5c4d273ac0e2dc94e3b914bdc6c34a5a9f3294cdefc7"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:42:30 crc kubenswrapper[4771]: I0123 13:42:30.313434 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://6b35dff9a089e9be4ffa5c4d273ac0e2dc94e3b914bdc6c34a5a9f3294cdefc7" gracePeriod=600 Jan 23 13:42:31 crc kubenswrapper[4771]: I0123 13:42:31.223871 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="6b35dff9a089e9be4ffa5c4d273ac0e2dc94e3b914bdc6c34a5a9f3294cdefc7" exitCode=0 Jan 23 13:42:31 crc kubenswrapper[4771]: I0123 13:42:31.223955 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"6b35dff9a089e9be4ffa5c4d273ac0e2dc94e3b914bdc6c34a5a9f3294cdefc7"} Jan 23 13:42:31 crc kubenswrapper[4771]: I0123 13:42:31.224275 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"7197a1f46f7fe0a055c3cd1d599823ec1a0bf6cce8f38ed1e420f676015408ef"} Jan 23 13:42:31 crc kubenswrapper[4771]: I0123 13:42:31.224303 4771 scope.go:117] "RemoveContainer" containerID="1b44f9611bbafce674e71ca1e8d34068dfa0d63956d90aaa82888afd111bd7d1" Jan 23 13:42:40 crc kubenswrapper[4771]: I0123 13:42:40.227637 4771 util.go:30] "No sandbox for pod can be found. 
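[Annotation] The machine-config-daemon restart recorded above is purely probe-driven: the liveness probe against http://127.0.0.1:8798/health keeps returning "connection refused" (13:42:00 and 13:42:30 are visible in this window), so at 13:42:30 the kubelet marks the container unhealthy, kills 6b35dff9... with the logged 600-second grace period, and a replacement (7197a1f4...) is running by 13:42:31. A minimal sketch of the same HTTP check (Python; the URL is taken from the probe output above, and the timeout is illustrative):

    import urllib.error
    import urllib.request

    URL = "http://127.0.0.1:8798/health"  # endpoint quoted in the probe failure above

    def probe(url: str, timeout: float = 1.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                # An HTTP probe counts 2xx/3xx as success; 4xx/5xx raise
                # HTTPError, which is caught below and counted as failure.
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            return False  # connection refused, timeout, unreachable, ...

    print("liveness:", "success" if probe(URL) else "failure")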
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:40 crc kubenswrapper[4771]: I0123 13:42:40.229079 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:40 crc kubenswrapper[4771]: I0123 13:42:40.665855 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s"] Jan 23 13:42:41 crc kubenswrapper[4771]: I0123 13:42:41.296904 4771 generic.go:334] "Generic (PLEG): container finished" podID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerID="5aed448c3ab61bc597cbb007e8a4fc2c22d1613c17876a9e4646a107ad3db609" exitCode=0 Jan 23 13:42:41 crc kubenswrapper[4771]: I0123 13:42:41.297017 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" event={"ID":"f5bf361f-9ef1-4f6f-bc47-c428011faeac","Type":"ContainerDied","Data":"5aed448c3ab61bc597cbb007e8a4fc2c22d1613c17876a9e4646a107ad3db609"} Jan 23 13:42:41 crc kubenswrapper[4771]: I0123 13:42:41.297374 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" event={"ID":"f5bf361f-9ef1-4f6f-bc47-c428011faeac","Type":"ContainerStarted","Data":"57cdab94f985767015be28205e01fb44f068dc57ee1961aafe209c898fbf836a"} Jan 23 13:42:43 crc kubenswrapper[4771]: I0123 13:42:43.316219 4771 generic.go:334] "Generic (PLEG): container finished" podID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerID="f71a0da291e4c41e94c26452d1bcac929f62d8e99708b274eeda3e30bb00d51a" exitCode=0 Jan 23 13:42:43 crc kubenswrapper[4771]: I0123 13:42:43.316290 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" event={"ID":"f5bf361f-9ef1-4f6f-bc47-c428011faeac","Type":"ContainerDied","Data":"f71a0da291e4c41e94c26452d1bcac929f62d8e99708b274eeda3e30bb00d51a"} Jan 23 13:42:44 crc kubenswrapper[4771]: I0123 13:42:44.328803 4771 generic.go:334] "Generic (PLEG): container finished" podID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerID="979b905201264eae785874d2c84e73abb3aaefdaf9e98ef27fb97fb18aa5a2f3" exitCode=0 Jan 23 13:42:44 crc kubenswrapper[4771]: I0123 13:42:44.328889 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" event={"ID":"f5bf361f-9ef1-4f6f-bc47-c428011faeac","Type":"ContainerDied","Data":"979b905201264eae785874d2c84e73abb3aaefdaf9e98ef27fb97fb18aa5a2f3"} Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.606008 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.726844 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-util\") pod \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.726978 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-bundle\") pod \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.727024 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5lqj\" (UniqueName: \"kubernetes.io/projected/f5bf361f-9ef1-4f6f-bc47-c428011faeac-kube-api-access-v5lqj\") pod \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\" (UID: \"f5bf361f-9ef1-4f6f-bc47-c428011faeac\") " Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.730052 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-bundle" (OuterVolumeSpecName: "bundle") pod "f5bf361f-9ef1-4f6f-bc47-c428011faeac" (UID: "f5bf361f-9ef1-4f6f-bc47-c428011faeac"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.733457 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bf361f-9ef1-4f6f-bc47-c428011faeac-kube-api-access-v5lqj" (OuterVolumeSpecName: "kube-api-access-v5lqj") pod "f5bf361f-9ef1-4f6f-bc47-c428011faeac" (UID: "f5bf361f-9ef1-4f6f-bc47-c428011faeac"). InnerVolumeSpecName "kube-api-access-v5lqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.746506 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-util" (OuterVolumeSpecName: "util") pod "f5bf361f-9ef1-4f6f-bc47-c428011faeac" (UID: "f5bf361f-9ef1-4f6f-bc47-c428011faeac"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.829640 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5lqj\" (UniqueName: \"kubernetes.io/projected/f5bf361f-9ef1-4f6f-bc47-c428011faeac-kube-api-access-v5lqj\") on node \"crc\" DevicePath \"\"" Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.829999 4771 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-util\") on node \"crc\" DevicePath \"\"" Jan 23 13:42:45 crc kubenswrapper[4771]: I0123 13:42:45.830012 4771 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f5bf361f-9ef1-4f6f-bc47-c428011faeac-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:42:46 crc kubenswrapper[4771]: I0123 13:42:46.346748 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" event={"ID":"f5bf361f-9ef1-4f6f-bc47-c428011faeac","Type":"ContainerDied","Data":"57cdab94f985767015be28205e01fb44f068dc57ee1961aafe209c898fbf836a"} Jan 23 13:42:46 crc kubenswrapper[4771]: I0123 13:42:46.346815 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57cdab94f985767015be28205e01fb44f068dc57ee1961aafe209c898fbf836a" Jan 23 13:42:46 crc kubenswrapper[4771]: I0123 13:42:46.346857 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.452964 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm"] Jan 23 13:42:56 crc kubenswrapper[4771]: E0123 13:42:56.453893 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerName="extract" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.453910 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerName="extract" Jan 23 13:42:56 crc kubenswrapper[4771]: E0123 13:42:56.453926 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerName="util" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.453934 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerName="util" Jan 23 13:42:56 crc kubenswrapper[4771]: E0123 13:42:56.453949 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerName="pull" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.453957 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerName="pull" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.454083 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bf361f-9ef1-4f6f-bc47-c428011faeac" containerName="extract" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.454590 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.456662 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.456742 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.456898 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-2c66w" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.477487 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.504940 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.505687 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:56 crc kubenswrapper[4771]: W0123 13:42:56.508421 4771 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-bfc4f": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-dockercfg-bfc4f" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Jan 23 13:42:56 crc kubenswrapper[4771]: E0123 13:42:56.508459 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-bfc4f\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-dockercfg-bfc4f\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:42:56 crc kubenswrapper[4771]: W0123 13:42:56.508519 4771 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Jan 23 13:42:56 crc kubenswrapper[4771]: E0123 13:42:56.508532 4771 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.518296 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.519041 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.530475 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.533605 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.577617 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.577876 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.578005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfdfh\" (UniqueName: \"kubernetes.io/projected/6b8c9e01-91e4-466a-bcca-f2302f8cf535-kube-api-access-kfdfh\") pod \"obo-prometheus-operator-68bc856cb9-w6ztm\" (UID: \"6b8c9e01-91e4-466a-bcca-f2302f8cf535\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.578081 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.578144 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.645736 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tldhc"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.646639 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.649507 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-sbxn7" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.650860 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.663274 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tldhc"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.678967 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfdfh\" (UniqueName: \"kubernetes.io/projected/6b8c9e01-91e4-466a-bcca-f2302f8cf535-kube-api-access-kfdfh\") pod \"obo-prometheus-operator-68bc856cb9-w6ztm\" (UID: \"6b8c9e01-91e4-466a-bcca-f2302f8cf535\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.679033 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.679067 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.679105 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.679128 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.706510 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfdfh\" (UniqueName: \"kubernetes.io/projected/6b8c9e01-91e4-466a-bcca-f2302f8cf535-kube-api-access-kfdfh\") pod \"obo-prometheus-operator-68bc856cb9-w6ztm\" (UID: \"6b8c9e01-91e4-466a-bcca-f2302f8cf535\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.774151 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.780486 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e93a101d-0d44-4b57-8ec6-bd9911f2e61b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tldhc\" (UID: \"e93a101d-0d44-4b57-8ec6-bd9911f2e61b\") " pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.780661 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrs6j\" (UniqueName: \"kubernetes.io/projected/e93a101d-0d44-4b57-8ec6-bd9911f2e61b-kube-api-access-hrs6j\") pod \"observability-operator-59bdc8b94-tldhc\" (UID: \"e93a101d-0d44-4b57-8ec6-bd9911f2e61b\") " pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.881792 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e93a101d-0d44-4b57-8ec6-bd9911f2e61b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tldhc\" (UID: \"e93a101d-0d44-4b57-8ec6-bd9911f2e61b\") " pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.881842 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrs6j\" (UniqueName: \"kubernetes.io/projected/e93a101d-0d44-4b57-8ec6-bd9911f2e61b-kube-api-access-hrs6j\") pod \"observability-operator-59bdc8b94-tldhc\" (UID: \"e93a101d-0d44-4b57-8ec6-bd9911f2e61b\") " pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.887267 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hp46t"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.894690 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.899341 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-5x299" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.900459 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e93a101d-0d44-4b57-8ec6-bd9911f2e61b-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tldhc\" (UID: \"e93a101d-0d44-4b57-8ec6-bd9911f2e61b\") " pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.905109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrs6j\" (UniqueName: \"kubernetes.io/projected/e93a101d-0d44-4b57-8ec6-bd9911f2e61b-kube-api-access-hrs6j\") pod \"observability-operator-59bdc8b94-tldhc\" (UID: \"e93a101d-0d44-4b57-8ec6-bd9911f2e61b\") " pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.925217 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hp46t"] Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.962699 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.984134 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8p9\" (UniqueName: \"kubernetes.io/projected/4d0ca4c2-0c2e-438c-a300-71ec0d624905-kube-api-access-mw8p9\") pod \"perses-operator-5bf474d74f-hp46t\" (UID: \"4d0ca4c2-0c2e-438c-a300-71ec0d624905\") " pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:56 crc kubenswrapper[4771]: I0123 13:42:56.984217 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d0ca4c2-0c2e-438c-a300-71ec0d624905-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hp46t\" (UID: \"4d0ca4c2-0c2e-438c-a300-71ec0d624905\") " pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.085181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw8p9\" (UniqueName: \"kubernetes.io/projected/4d0ca4c2-0c2e-438c-a300-71ec0d624905-kube-api-access-mw8p9\") pod \"perses-operator-5bf474d74f-hp46t\" (UID: \"4d0ca4c2-0c2e-438c-a300-71ec0d624905\") " pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.085575 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d0ca4c2-0c2e-438c-a300-71ec0d624905-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hp46t\" (UID: \"4d0ca4c2-0c2e-438c-a300-71ec0d624905\") " pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.087112 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d0ca4c2-0c2e-438c-a300-71ec0d624905-openshift-service-ca\") pod \"perses-operator-5bf474d74f-hp46t\" (UID: 
\"4d0ca4c2-0c2e-438c-a300-71ec0d624905\") " pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.109546 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw8p9\" (UniqueName: \"kubernetes.io/projected/4d0ca4c2-0c2e-438c-a300-71ec0d624905-kube-api-access-mw8p9\") pod \"perses-operator-5bf474d74f-hp46t\" (UID: \"4d0ca4c2-0c2e-438c-a300-71ec0d624905\") " pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.206810 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm"] Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.260738 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.345001 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tldhc"] Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.425772 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" event={"ID":"6b8c9e01-91e4-466a-bcca-f2302f8cf535","Type":"ContainerStarted","Data":"9f3392d29fe8228a7d51aa97e992b39797066be23348b5089fcb002d30563e23"} Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.429564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" event={"ID":"e93a101d-0d44-4b57-8ec6-bd9911f2e61b","Type":"ContainerStarted","Data":"7e9a741f5042cd08bab1e809944f7e57f05881f4917d385591afdffe88b96533"} Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.524558 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-hp46t"] Jan 23 13:42:57 crc kubenswrapper[4771]: W0123 13:42:57.531125 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d0ca4c2_0c2e_438c_a300_71ec0d624905.slice/crio-f1fbd131ccd67b4e2b9d182ba36326ee034a054dd1db3b458000e28bf113d8d8 WatchSource:0}: Error finding container f1fbd131ccd67b4e2b9d182ba36326ee034a054dd1db3b458000e28bf113d8d8: Status 404 returned error can't find the container with id f1fbd131ccd67b4e2b9d182ba36326ee034a054dd1db3b458000e28bf113d8d8 Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.679470 4771 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.679566 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-webhook-cert podName:f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b nodeName:}" failed. No retries permitted until 2026-01-23 13:42:58.179543169 +0000 UTC m=+619.202080794 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-webhook-cert") pod "obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" (UID: "f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b") : failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.679574 4771 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.679635 4771 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.679709 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-apiservice-cert podName:f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b nodeName:}" failed. No retries permitted until 2026-01-23 13:42:58.179681183 +0000 UTC m=+619.202219018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" (UID: "f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b") : failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.679752 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-webhook-cert podName:1b61dbbd-9e69-43e6-9a83-68115a11bef6 nodeName:}" failed. No retries permitted until 2026-01-23 13:42:58.179720515 +0000 UTC m=+619.202258320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-webhook-cert") pod "obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" (UID: "1b61dbbd-9e69-43e6-9a83-68115a11bef6") : failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.680284 4771 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: E0123 13:42:57.680391 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-apiservice-cert podName:1b61dbbd-9e69-43e6-9a83-68115a11bef6 nodeName:}" failed. No retries permitted until 2026-01-23 13:42:58.180366246 +0000 UTC m=+619.202904061 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" (UID: "1b61dbbd-9e69-43e6-9a83-68115a11bef6") : failed to sync secret cache: timed out waiting for the condition Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.710045 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-bfc4f" Jan 23 13:42:57 crc kubenswrapper[4771]: I0123 13:42:57.910016 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.204245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.204578 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.204632 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.204667 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.210205 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.210238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq\" (UID: \"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.210333 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.211003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b61dbbd-9e69-43e6-9a83-68115a11bef6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8\" (UID: \"1b61dbbd-9e69-43e6-9a83-68115a11bef6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.322458 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.333480 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.451333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-hp46t" event={"ID":"4d0ca4c2-0c2e-438c-a300-71ec0d624905","Type":"ContainerStarted","Data":"f1fbd131ccd67b4e2b9d182ba36326ee034a054dd1db3b458000e28bf113d8d8"} Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.600523 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8"] Jan 23 13:42:58 crc kubenswrapper[4771]: W0123 13:42:58.640175 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b61dbbd_9e69_43e6_9a83_68115a11bef6.slice/crio-c7d9f9b6c5c5ff424298e77f353b162645f2c87ad3495044a684d90e8f0ce99c WatchSource:0}: Error finding container c7d9f9b6c5c5ff424298e77f353b162645f2c87ad3495044a684d90e8f0ce99c: Status 404 returned error can't find the container with id c7d9f9b6c5c5ff424298e77f353b162645f2c87ad3495044a684d90e8f0ce99c Jan 23 13:42:58 crc kubenswrapper[4771]: I0123 13:42:58.692898 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq"] Jan 23 13:42:58 crc kubenswrapper[4771]: W0123 13:42:58.714540 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63b51b7_c5c0_48e1_bd12_7d0cb3dfc23b.slice/crio-902c90b696f157cac398b1dd7ce9174c6694f1528c00b92f5690f2e3c851ff92 WatchSource:0}: Error finding container 902c90b696f157cac398b1dd7ce9174c6694f1528c00b92f5690f2e3c851ff92: Status 404 returned error can't find the container with id 902c90b696f157cac398b1dd7ce9174c6694f1528c00b92f5690f2e3c851ff92 Jan 23 13:42:59 crc kubenswrapper[4771]: I0123 13:42:59.480967 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" event={"ID":"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b","Type":"ContainerStarted","Data":"902c90b696f157cac398b1dd7ce9174c6694f1528c00b92f5690f2e3c851ff92"} Jan 23 13:42:59 crc kubenswrapper[4771]: I0123 13:42:59.496007 4771 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" event={"ID":"1b61dbbd-9e69-43e6-9a83-68115a11bef6","Type":"ContainerStarted","Data":"c7d9f9b6c5c5ff424298e77f353b162645f2c87ad3495044a684d90e8f0ce99c"} Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.656701 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" event={"ID":"1b61dbbd-9e69-43e6-9a83-68115a11bef6","Type":"ContainerStarted","Data":"80601235f4f8e13b2990e27a3887bf92efb712620e03906d88dc68252df65da5"} Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.658162 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" event={"ID":"f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b","Type":"ContainerStarted","Data":"68509bf64d79dd15febada7fbcdccf52c0ee2c48f976b41696c4688a59b5c4ac"} Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.659482 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-hp46t" event={"ID":"4d0ca4c2-0c2e-438c-a300-71ec0d624905","Type":"ContainerStarted","Data":"3a5e0e8040119b6d610f82656f165732338cf3e626f031c793c21577a48491dc"} Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.659530 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.661056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" event={"ID":"e93a101d-0d44-4b57-8ec6-bd9911f2e61b","Type":"ContainerStarted","Data":"4d229f705395b957c00060d7b78d2bf2a768ac5a18344642bf8863d73ed20c86"} Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.661248 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.662443 4771 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-tldhc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.39:8081/healthz\": dial tcp 10.217.0.39:8081: connect: connection refused" start-of-body= Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.662489 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" podUID="e93a101d-0d44-4b57-8ec6-bd9911f2e61b" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.39:8081/healthz\": dial tcp 10.217.0.39:8081: connect: connection refused" Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.697732 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8" podStartSLOduration=3.05409537 podStartE2EDuration="14.697711129s" podCreationTimestamp="2026-01-23 13:42:56 +0000 UTC" firstStartedPulling="2026-01-23 13:42:58.677022456 +0000 UTC m=+619.699560081" lastFinishedPulling="2026-01-23 13:43:10.320638215 +0000 UTC m=+631.343175840" observedRunningTime="2026-01-23 13:43:10.695334742 +0000 UTC m=+631.717872377" watchObservedRunningTime="2026-01-23 13:43:10.697711129 +0000 UTC m=+631.720248754" Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.732774 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq" podStartSLOduration=3.173670122 podStartE2EDuration="14.732752215s" podCreationTimestamp="2026-01-23 13:42:56 +0000 UTC" firstStartedPulling="2026-01-23 13:42:58.716904947 +0000 UTC m=+619.739442562" lastFinishedPulling="2026-01-23 13:43:10.27598703 +0000 UTC m=+631.298524655" observedRunningTime="2026-01-23 13:43:10.727465465 +0000 UTC m=+631.750003110" watchObservedRunningTime="2026-01-23 13:43:10.732752215 +0000 UTC m=+631.755289860" Jan 23 13:43:10 crc kubenswrapper[4771]: I0123 13:43:10.765390 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" podStartSLOduration=1.835637734 podStartE2EDuration="14.765359082s" podCreationTimestamp="2026-01-23 13:42:56 +0000 UTC" firstStartedPulling="2026-01-23 13:42:57.3468276 +0000 UTC m=+618.369365225" lastFinishedPulling="2026-01-23 13:43:10.276548948 +0000 UTC m=+631.299086573" observedRunningTime="2026-01-23 13:43:10.758616316 +0000 UTC m=+631.781153941" watchObservedRunningTime="2026-01-23 13:43:10.765359082 +0000 UTC m=+631.787896707" Jan 23 13:43:11 crc kubenswrapper[4771]: I0123 13:43:11.671917 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" event={"ID":"6b8c9e01-91e4-466a-bcca-f2302f8cf535","Type":"ContainerStarted","Data":"bdaaae423fd01eb2d4be23ab9f390b458c2d3af9e3322964e3b5e4508bae5b00"} Jan 23 13:43:11 crc kubenswrapper[4771]: I0123 13:43:11.688567 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tldhc" Jan 23 13:43:11 crc kubenswrapper[4771]: I0123 13:43:11.698189 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-hp46t" podStartSLOduration=2.954375346 podStartE2EDuration="15.698166821s" podCreationTimestamp="2026-01-23 13:42:56 +0000 UTC" firstStartedPulling="2026-01-23 13:42:57.533074294 +0000 UTC m=+618.555611919" lastFinishedPulling="2026-01-23 13:43:10.276865769 +0000 UTC m=+631.299403394" observedRunningTime="2026-01-23 13:43:10.7923607 +0000 UTC m=+631.814898335" watchObservedRunningTime="2026-01-23 13:43:11.698166821 +0000 UTC m=+632.720704446" Jan 23 13:43:11 crc kubenswrapper[4771]: I0123 13:43:11.698722 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-w6ztm" podStartSLOduration=2.580270387 podStartE2EDuration="15.698717409s" podCreationTimestamp="2026-01-23 13:42:56 +0000 UTC" firstStartedPulling="2026-01-23 13:42:57.229256532 +0000 UTC m=+618.251794157" lastFinishedPulling="2026-01-23 13:43:10.347703554 +0000 UTC m=+631.370241179" observedRunningTime="2026-01-23 13:43:11.697595543 +0000 UTC m=+632.720133168" watchObservedRunningTime="2026-01-23 13:43:11.698717409 +0000 UTC m=+632.721255034" Jan 23 13:43:17 crc kubenswrapper[4771]: I0123 13:43:17.263951 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-hp46t" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.028984 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj"] Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.030638 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.034824 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.049627 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj"] Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.088883 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.088969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.089118 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcq2h\" (UniqueName: \"kubernetes.io/projected/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-kube-api-access-rcq2h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.190547 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.190625 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.190671 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcq2h\" (UniqueName: \"kubernetes.io/projected/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-kube-api-access-rcq2h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.191525 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.191788 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.216815 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcq2h\" (UniqueName: \"kubernetes.io/projected/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-kube-api-access-rcq2h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.346796 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.773768 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj"] Jan 23 13:43:35 crc kubenswrapper[4771]: I0123 13:43:35.834728 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" event={"ID":"5b7807a4-f406-498f-b7c4-a1bbe8ab5957","Type":"ContainerStarted","Data":"1f1eb7b85415728a4ed2a540ebe25c55a7961321e356d70caa4a98d0598a5178"} Jan 23 13:43:36 crc kubenswrapper[4771]: I0123 13:43:36.841990 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerID="7a0935f43147cff352673e04e4be18217be13bfff3dfbb0d6b92eae30427dc94" exitCode=0 Jan 23 13:43:36 crc kubenswrapper[4771]: I0123 13:43:36.842041 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" event={"ID":"5b7807a4-f406-498f-b7c4-a1bbe8ab5957","Type":"ContainerDied","Data":"7a0935f43147cff352673e04e4be18217be13bfff3dfbb0d6b92eae30427dc94"} Jan 23 13:43:38 crc kubenswrapper[4771]: I0123 13:43:38.855645 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerID="c832fec0cf290026ebfe1d83b57e926c1fc21e2f5a91ce7f6874edb04e0a3d4b" exitCode=0 Jan 23 13:43:38 crc kubenswrapper[4771]: I0123 13:43:38.855750 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" event={"ID":"5b7807a4-f406-498f-b7c4-a1bbe8ab5957","Type":"ContainerDied","Data":"c832fec0cf290026ebfe1d83b57e926c1fc21e2f5a91ce7f6874edb04e0a3d4b"} Jan 23 13:43:39 crc kubenswrapper[4771]: I0123 13:43:39.866567 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerID="b9a30d80c466cc476044b3b2843d9254a4797fe9369522df79afd821e282f7c9" exitCode=0 Jan 23 13:43:39 crc kubenswrapper[4771]: I0123 
13:43:39.866688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" event={"ID":"5b7807a4-f406-498f-b7c4-a1bbe8ab5957","Type":"ContainerDied","Data":"b9a30d80c466cc476044b3b2843d9254a4797fe9369522df79afd821e282f7c9"} Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.173712 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.277186 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcq2h\" (UniqueName: \"kubernetes.io/projected/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-kube-api-access-rcq2h\") pod \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.277311 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-util\") pod \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.277354 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-bundle\") pod \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\" (UID: \"5b7807a4-f406-498f-b7c4-a1bbe8ab5957\") " Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.277966 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-bundle" (OuterVolumeSpecName: "bundle") pod "5b7807a4-f406-498f-b7c4-a1bbe8ab5957" (UID: "5b7807a4-f406-498f-b7c4-a1bbe8ab5957"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.284771 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-kube-api-access-rcq2h" (OuterVolumeSpecName: "kube-api-access-rcq2h") pod "5b7807a4-f406-498f-b7c4-a1bbe8ab5957" (UID: "5b7807a4-f406-498f-b7c4-a1bbe8ab5957"). InnerVolumeSpecName "kube-api-access-rcq2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.291366 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-util" (OuterVolumeSpecName: "util") pod "5b7807a4-f406-498f-b7c4-a1bbe8ab5957" (UID: "5b7807a4-f406-498f-b7c4-a1bbe8ab5957"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.379355 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcq2h\" (UniqueName: \"kubernetes.io/projected/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-kube-api-access-rcq2h\") on node \"crc\" DevicePath \"\"" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.379394 4771 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-util\") on node \"crc\" DevicePath \"\"" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.379440 4771 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5b7807a4-f406-498f-b7c4-a1bbe8ab5957-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.880227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" event={"ID":"5b7807a4-f406-498f-b7c4-a1bbe8ab5957","Type":"ContainerDied","Data":"1f1eb7b85415728a4ed2a540ebe25c55a7961321e356d70caa4a98d0598a5178"} Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.880275 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj" Jan 23 13:43:41 crc kubenswrapper[4771]: I0123 13:43:41.880281 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1eb7b85415728a4ed2a540ebe25c55a7961321e356d70caa4a98d0598a5178" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.447420 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cbcpg"] Jan 23 13:43:46 crc kubenswrapper[4771]: E0123 13:43:46.448109 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerName="util" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.448128 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerName="util" Jan 23 13:43:46 crc kubenswrapper[4771]: E0123 13:43:46.448142 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerName="extract" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.448150 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerName="extract" Jan 23 13:43:46 crc kubenswrapper[4771]: E0123 13:43:46.448177 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerName="pull" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.448186 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerName="pull" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.448322 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7807a4-f406-498f-b7c4-a1bbe8ab5957" containerName="extract" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.448916 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.451970 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.452233 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-n9bks" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.452668 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.463204 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cbcpg"] Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.574606 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6mg\" (UniqueName: \"kubernetes.io/projected/c67bd392-c996-44eb-af78-1822d2b08b16-kube-api-access-kd6mg\") pod \"nmstate-operator-646758c888-cbcpg\" (UID: \"c67bd392-c996-44eb-af78-1822d2b08b16\") " pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.675787 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd6mg\" (UniqueName: \"kubernetes.io/projected/c67bd392-c996-44eb-af78-1822d2b08b16-kube-api-access-kd6mg\") pod \"nmstate-operator-646758c888-cbcpg\" (UID: \"c67bd392-c996-44eb-af78-1822d2b08b16\") " pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.702562 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd6mg\" (UniqueName: \"kubernetes.io/projected/c67bd392-c996-44eb-af78-1822d2b08b16-kube-api-access-kd6mg\") pod \"nmstate-operator-646758c888-cbcpg\" (UID: \"c67bd392-c996-44eb-af78-1822d2b08b16\") " pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" Jan 23 13:43:46 crc kubenswrapper[4771]: I0123 13:43:46.767588 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" Jan 23 13:43:47 crc kubenswrapper[4771]: I0123 13:43:47.163586 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-cbcpg"] Jan 23 13:43:47 crc kubenswrapper[4771]: I0123 13:43:47.920486 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" event={"ID":"c67bd392-c996-44eb-af78-1822d2b08b16","Type":"ContainerStarted","Data":"c114730fd7eb891efb1ef2c0fd984045f3c644a60ad7bda18e5bd7837f6e7135"} Jan 23 13:43:49 crc kubenswrapper[4771]: I0123 13:43:49.933869 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" event={"ID":"c67bd392-c996-44eb-af78-1822d2b08b16","Type":"ContainerStarted","Data":"68a18a5c0e7db4894cb5a9aa88a32e899b5f0ed9dbd10a6c3340979d5b49b07e"} Jan 23 13:43:49 crc kubenswrapper[4771]: I0123 13:43:49.952038 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-cbcpg" podStartSLOduration=1.620031684 podStartE2EDuration="3.95201489s" podCreationTimestamp="2026-01-23 13:43:46 +0000 UTC" firstStartedPulling="2026-01-23 13:43:47.175315481 +0000 UTC m=+668.197853106" lastFinishedPulling="2026-01-23 13:43:49.507298687 +0000 UTC m=+670.529836312" observedRunningTime="2026-01-23 13:43:49.947844404 +0000 UTC m=+670.970382039" watchObservedRunningTime="2026-01-23 13:43:49.95201489 +0000 UTC m=+670.974552515" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.249542 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ggk4b"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.251575 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.253919 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dq8jk" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.261851 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.262702 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.267640 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.288861 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ggk4b"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.296319 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.316696 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q45wt"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.320793 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.399471 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.400273 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.404860 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.405608 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-kcqbj" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.416872 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.417083 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.420057 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxzq\" (UniqueName: \"kubernetes.io/projected/285b790a-ad5d-4f7a-aba6-ffc18d69d449-kube-api-access-xfxzq\") pod \"nmstate-metrics-54757c584b-ggk4b\" (UID: \"285b790a-ad5d-4f7a-aba6-ffc18d69d449\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.423629 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/02c5a515-61c6-46ae-ba60-5c1c04e7bcfd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hnhr7\" (UID: \"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.423784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294rd\" (UniqueName: \"kubernetes.io/projected/02c5a515-61c6-46ae-ba60-5c1c04e7bcfd-kube-api-access-294rd\") pod \"nmstate-webhook-8474b5b9d8-hnhr7\" (UID: \"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.524954 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7lp\" (UniqueName: \"kubernetes.io/projected/34806cb5-67fe-4e5e-a50d-3993df18ceef-kube-api-access-tf7lp\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525069 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfxzq\" (UniqueName: \"kubernetes.io/projected/285b790a-ad5d-4f7a-aba6-ffc18d69d449-kube-api-access-xfxzq\") pod \"nmstate-metrics-54757c584b-ggk4b\" (UID: \"285b790a-ad5d-4f7a-aba6-ffc18d69d449\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525109 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-nmstate-lock\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525228 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vdcs\" (UniqueName: \"kubernetes.io/projected/0f261a76-9325-4a93-b553-7937534cc5a9-kube-api-access-7vdcs\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-dbus-socket\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525331 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/02c5a515-61c6-46ae-ba60-5c1c04e7bcfd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hnhr7\" (UID: \"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525450 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-ovs-socket\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525545 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34806cb5-67fe-4e5e-a50d-3993df18ceef-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525625 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294rd\" (UniqueName: \"kubernetes.io/projected/02c5a515-61c6-46ae-ba60-5c1c04e7bcfd-kube-api-access-294rd\") pod \"nmstate-webhook-8474b5b9d8-hnhr7\" (UID: \"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.525651 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/34806cb5-67fe-4e5e-a50d-3993df18ceef-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.534976 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/02c5a515-61c6-46ae-ba60-5c1c04e7bcfd-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-hnhr7\" (UID: \"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 
13:43:56.543946 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfxzq\" (UniqueName: \"kubernetes.io/projected/285b790a-ad5d-4f7a-aba6-ffc18d69d449-kube-api-access-xfxzq\") pod \"nmstate-metrics-54757c584b-ggk4b\" (UID: \"285b790a-ad5d-4f7a-aba6-ffc18d69d449\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.548219 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294rd\" (UniqueName: \"kubernetes.io/projected/02c5a515-61c6-46ae-ba60-5c1c04e7bcfd-kube-api-access-294rd\") pod \"nmstate-webhook-8474b5b9d8-hnhr7\" (UID: \"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.577179 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.592122 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.592739 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-679698f9fc-dn5q8"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.593609 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.612081 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-679698f9fc-dn5q8"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631213 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-oauth-serving-cert\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631281 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-oauth-config\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631352 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-nmstate-lock\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631377 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmpr\" (UniqueName: \"kubernetes.io/projected/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-kube-api-access-stmpr\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631403 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-config\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631450 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vdcs\" (UniqueName: \"kubernetes.io/projected/0f261a76-9325-4a93-b553-7937534cc5a9-kube-api-access-7vdcs\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631471 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-dbus-socket\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631526 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-trusted-ca-bundle\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631551 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-ovs-socket\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631591 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34806cb5-67fe-4e5e-a50d-3993df18ceef-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631626 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-serving-cert\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631647 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/34806cb5-67fe-4e5e-a50d-3993df18ceef-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631670 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-service-ca\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.631696 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tf7lp\" (UniqueName: \"kubernetes.io/projected/34806cb5-67fe-4e5e-a50d-3993df18ceef-kube-api-access-tf7lp\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.632094 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-nmstate-lock\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.632108 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-ovs-socket\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.632592 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/0f261a76-9325-4a93-b553-7937534cc5a9-dbus-socket\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.633447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/34806cb5-67fe-4e5e-a50d-3993df18ceef-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.646431 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/34806cb5-67fe-4e5e-a50d-3993df18ceef-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.665386 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf7lp\" (UniqueName: \"kubernetes.io/projected/34806cb5-67fe-4e5e-a50d-3993df18ceef-kube-api-access-tf7lp\") pod \"nmstate-console-plugin-7754f76f8b-5zrsc\" (UID: \"34806cb5-67fe-4e5e-a50d-3993df18ceef\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.667117 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vdcs\" (UniqueName: \"kubernetes.io/projected/0f261a76-9325-4a93-b553-7937534cc5a9-kube-api-access-7vdcs\") pod \"nmstate-handler-q45wt\" (UID: \"0f261a76-9325-4a93-b553-7937534cc5a9\") " pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.725450 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.732575 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-serving-cert\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.732960 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-service-ca\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.732987 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-oauth-serving-cert\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.733014 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-oauth-config\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.733037 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stmpr\" (UniqueName: \"kubernetes.io/projected/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-kube-api-access-stmpr\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.733054 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-config\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.733095 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-trusted-ca-bundle\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.734266 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-trusted-ca-bundle\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.737993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-config\") pod \"console-679698f9fc-dn5q8\" (UID: 
\"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.738494 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-serving-cert\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.739595 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-oauth-serving-cert\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.739654 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-console-oauth-config\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.740097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-service-ca\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.757912 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stmpr\" (UniqueName: \"kubernetes.io/projected/db44e4c0-a5d2-492e-a3fe-76c7abce3f70-kube-api-access-stmpr\") pod \"console-679698f9fc-dn5q8\" (UID: \"db44e4c0-a5d2-492e-a3fe-76c7abce3f70\") " pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.919775 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ggk4b"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.936512 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7"] Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.948150 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.983469 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:43:56 crc kubenswrapper[4771]: I0123 13:43:56.987641 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" event={"ID":"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd","Type":"ContainerStarted","Data":"2ede890ce633b102991b075cd162a61851a3bcb4acab6ccb9b670b9ce9a0603f"} Jan 23 13:43:57 crc kubenswrapper[4771]: I0123 13:43:57.003549 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" event={"ID":"285b790a-ad5d-4f7a-aba6-ffc18d69d449","Type":"ContainerStarted","Data":"1be7c018bd1b2bd30a1a8e9725b87ce3eead48d4b998f3b9c5d21ef4ddc8028b"} Jan 23 13:43:57 crc kubenswrapper[4771]: W0123 13:43:57.015455 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f261a76_9325_4a93_b553_7937534cc5a9.slice/crio-db63eb28fba4894840b6bb38e6a2a7b515e64e31452f3a070d7deafdf066151d WatchSource:0}: Error finding container db63eb28fba4894840b6bb38e6a2a7b515e64e31452f3a070d7deafdf066151d: Status 404 returned error can't find the container with id db63eb28fba4894840b6bb38e6a2a7b515e64e31452f3a070d7deafdf066151d Jan 23 13:43:57 crc kubenswrapper[4771]: I0123 13:43:57.035841 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc"] Jan 23 13:43:57 crc kubenswrapper[4771]: I0123 13:43:57.225770 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-679698f9fc-dn5q8"] Jan 23 13:43:58 crc kubenswrapper[4771]: I0123 13:43:58.915185 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" event={"ID":"34806cb5-67fe-4e5e-a50d-3993df18ceef","Type":"ContainerStarted","Data":"814e1a5aab08813e8f5c65e9c25351890872f573119f334d63f10303d6c432f9"} Jan 23 13:43:58 crc kubenswrapper[4771]: I0123 13:43:58.918138 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679698f9fc-dn5q8" event={"ID":"db44e4c0-a5d2-492e-a3fe-76c7abce3f70","Type":"ContainerStarted","Data":"646d87a4c727eca2debff42cd1fd6a089e5f1e23a40130d2fa33025e9b43f112"} Jan 23 13:43:58 crc kubenswrapper[4771]: I0123 13:43:58.918163 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-679698f9fc-dn5q8" event={"ID":"db44e4c0-a5d2-492e-a3fe-76c7abce3f70","Type":"ContainerStarted","Data":"622fd8b7e3a0d6f9b1fd8807640ae7cb00114da601d165c27ae9b264bd588423"} Jan 23 13:43:58 crc kubenswrapper[4771]: I0123 13:43:58.922497 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q45wt" event={"ID":"0f261a76-9325-4a93-b553-7937534cc5a9","Type":"ContainerStarted","Data":"db63eb28fba4894840b6bb38e6a2a7b515e64e31452f3a070d7deafdf066151d"} Jan 23 13:43:58 crc kubenswrapper[4771]: I0123 13:43:58.945403 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-679698f9fc-dn5q8" podStartSLOduration=2.9453824600000003 podStartE2EDuration="2.94538246s" podCreationTimestamp="2026-01-23 13:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:43:58.941557827 +0000 UTC m=+679.964095482" watchObservedRunningTime="2026-01-23 13:43:58.94538246 +0000 UTC m=+679.967920095" Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.958380 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" event={"ID":"34806cb5-67fe-4e5e-a50d-3993df18ceef","Type":"ContainerStarted","Data":"6b266b451e7df72e4128ebad293c842feea235fe77ca5fd907a78f4773c4c18d"} Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.960657 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" event={"ID":"02c5a515-61c6-46ae-ba60-5c1c04e7bcfd","Type":"ContainerStarted","Data":"519a06d4b83ec28a553bfc273a129187bea74818e4f1a0edf4f8070991e55de3"} Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.960712 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.962974 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q45wt" event={"ID":"0f261a76-9325-4a93-b553-7937534cc5a9","Type":"ContainerStarted","Data":"60dc8ebecc2659b9bd374df794fad95ec07d6e6087620c878ed85e38e8076bdf"} Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.963027 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.964693 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" event={"ID":"285b790a-ad5d-4f7a-aba6-ffc18d69d449","Type":"ContainerStarted","Data":"a1b90942b6344457eec04d8b6309b36438e96a7ed6a7f543bffd3f20d993f34b"} Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.979099 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-5zrsc" podStartSLOduration=2.206531831 podStartE2EDuration="7.979073117s" podCreationTimestamp="2026-01-23 13:43:56 +0000 UTC" firstStartedPulling="2026-01-23 13:43:57.063004229 +0000 UTC m=+678.085541854" lastFinishedPulling="2026-01-23 13:44:02.835545495 +0000 UTC m=+683.858083140" observedRunningTime="2026-01-23 13:44:03.974647443 +0000 UTC m=+684.997185088" watchObservedRunningTime="2026-01-23 13:44:03.979073117 +0000 UTC m=+685.001610742" Jan 23 13:44:03 crc kubenswrapper[4771]: I0123 13:44:03.998736 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" podStartSLOduration=2.117655539 podStartE2EDuration="7.998709975s" podCreationTimestamp="2026-01-23 13:43:56 +0000 UTC" firstStartedPulling="2026-01-23 13:43:56.972616998 +0000 UTC m=+677.995154623" lastFinishedPulling="2026-01-23 13:44:02.853671434 +0000 UTC m=+683.876209059" observedRunningTime="2026-01-23 13:44:03.998068385 +0000 UTC m=+685.020606040" watchObservedRunningTime="2026-01-23 13:44:03.998709975 +0000 UTC m=+685.021247610" Jan 23 13:44:04 crc kubenswrapper[4771]: I0123 13:44:04.021402 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q45wt" podStartSLOduration=2.215230874 podStartE2EDuration="8.021378143s" podCreationTimestamp="2026-01-23 13:43:56 +0000 UTC" firstStartedPulling="2026-01-23 13:43:57.025825149 +0000 UTC m=+678.048362774" lastFinishedPulling="2026-01-23 13:44:02.831972418 +0000 UTC m=+683.854510043" observedRunningTime="2026-01-23 13:44:04.01728032 +0000 UTC m=+685.039817965" watchObservedRunningTime="2026-01-23 13:44:04.021378143 +0000 UTC m=+685.043915768" Jan 23 13:44:05 crc kubenswrapper[4771]: I0123 
13:44:05.998493 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" event={"ID":"285b790a-ad5d-4f7a-aba6-ffc18d69d449","Type":"ContainerStarted","Data":"c4eb4c7b373766ae21816fd91fa45bdb36f6de01f9373d422b9f74a614911979"} Jan 23 13:44:06 crc kubenswrapper[4771]: I0123 13:44:06.019892 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ggk4b" podStartSLOduration=1.520837569 podStartE2EDuration="10.019864891s" podCreationTimestamp="2026-01-23 13:43:56 +0000 UTC" firstStartedPulling="2026-01-23 13:43:56.947079378 +0000 UTC m=+677.969617003" lastFinishedPulling="2026-01-23 13:44:05.44610669 +0000 UTC m=+686.468644325" observedRunningTime="2026-01-23 13:44:06.019614024 +0000 UTC m=+687.042151659" watchObservedRunningTime="2026-01-23 13:44:06.019864891 +0000 UTC m=+687.042402536" Jan 23 13:44:06 crc kubenswrapper[4771]: I0123 13:44:06.984295 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:44:06 crc kubenswrapper[4771]: I0123 13:44:06.984662 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:44:06 crc kubenswrapper[4771]: I0123 13:44:06.989322 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:44:07 crc kubenswrapper[4771]: I0123 13:44:07.011441 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-679698f9fc-dn5q8" Jan 23 13:44:07 crc kubenswrapper[4771]: I0123 13:44:07.072924 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-84f77"] Jan 23 13:44:11 crc kubenswrapper[4771]: I0123 13:44:11.978954 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q45wt" Jan 23 13:44:16 crc kubenswrapper[4771]: I0123 13:44:16.598334 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-hnhr7" Jan 23 13:44:30 crc kubenswrapper[4771]: I0123 13:44:30.312057 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:44:30 crc kubenswrapper[4771]: I0123 13:44:30.312706 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.139297 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-84f77" podUID="6c1e299b-6a89-4d9c-87ff-e2937d66487d" containerName="console" containerID="cri-o://4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5" gracePeriod=15 Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.492041 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-84f77_6c1e299b-6a89-4d9c-87ff-e2937d66487d/console/0.log" Jan 23 13:44:32 crc 
kubenswrapper[4771]: I0123 13:44:32.492366 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.667213 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-serving-cert\") pod \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.667263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-oauth-serving-cert\") pod \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.667304 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-config\") pod \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.667447 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-trusted-ca-bundle\") pod \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.667487 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-oauth-config\") pod \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.667512 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7gnt\" (UniqueName: \"kubernetes.io/projected/6c1e299b-6a89-4d9c-87ff-e2937d66487d-kube-api-access-d7gnt\") pod \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.667538 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-service-ca\") pod \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\" (UID: \"6c1e299b-6a89-4d9c-87ff-e2937d66487d\") " Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.668468 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-config" (OuterVolumeSpecName: "console-config") pod "6c1e299b-6a89-4d9c-87ff-e2937d66487d" (UID: "6c1e299b-6a89-4d9c-87ff-e2937d66487d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.668505 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-service-ca" (OuterVolumeSpecName: "service-ca") pod "6c1e299b-6a89-4d9c-87ff-e2937d66487d" (UID: "6c1e299b-6a89-4d9c-87ff-e2937d66487d"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.668552 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6c1e299b-6a89-4d9c-87ff-e2937d66487d" (UID: "6c1e299b-6a89-4d9c-87ff-e2937d66487d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.668770 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6c1e299b-6a89-4d9c-87ff-e2937d66487d" (UID: "6c1e299b-6a89-4d9c-87ff-e2937d66487d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.677121 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6c1e299b-6a89-4d9c-87ff-e2937d66487d" (UID: "6c1e299b-6a89-4d9c-87ff-e2937d66487d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.680137 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6c1e299b-6a89-4d9c-87ff-e2937d66487d" (UID: "6c1e299b-6a89-4d9c-87ff-e2937d66487d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.684043 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1e299b-6a89-4d9c-87ff-e2937d66487d-kube-api-access-d7gnt" (OuterVolumeSpecName: "kube-api-access-d7gnt") pod "6c1e299b-6a89-4d9c-87ff-e2937d66487d" (UID: "6c1e299b-6a89-4d9c-87ff-e2937d66487d"). InnerVolumeSpecName "kube-api-access-d7gnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.769171 4771 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.769218 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7gnt\" (UniqueName: \"kubernetes.io/projected/6c1e299b-6a89-4d9c-87ff-e2937d66487d-kube-api-access-d7gnt\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.769233 4771 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.769244 4771 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.769257 4771 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.769268 4771 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.769279 4771 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c1e299b-6a89-4d9c-87ff-e2937d66487d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.978796 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m"] Jan 23 13:44:32 crc kubenswrapper[4771]: E0123 13:44:32.979073 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1e299b-6a89-4d9c-87ff-e2937d66487d" containerName="console" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.979084 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1e299b-6a89-4d9c-87ff-e2937d66487d" containerName="console" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.979194 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1e299b-6a89-4d9c-87ff-e2937d66487d" containerName="console" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.980244 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.983212 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 13:44:32 crc kubenswrapper[4771]: I0123 13:44:32.986871 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m"] Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.073241 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.073317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.073463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrxkk\" (UniqueName: \"kubernetes.io/projected/9578571d-8113-4e8e-b829-4c7283c4fbf1-kube-api-access-nrxkk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.175341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrxkk\" (UniqueName: \"kubernetes.io/projected/9578571d-8113-4e8e-b829-4c7283c4fbf1-kube-api-access-nrxkk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.175443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.175489 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.176052 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.176427 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.183844 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-84f77_6c1e299b-6a89-4d9c-87ff-e2937d66487d/console/0.log" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.183898 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c1e299b-6a89-4d9c-87ff-e2937d66487d" containerID="4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5" exitCode=2 Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.183935 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-84f77" event={"ID":"6c1e299b-6a89-4d9c-87ff-e2937d66487d","Type":"ContainerDied","Data":"4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5"} Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.183981 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-84f77" event={"ID":"6c1e299b-6a89-4d9c-87ff-e2937d66487d","Type":"ContainerDied","Data":"aeca68a8ccb77467525b55fc39d2a4667bbc683e12b0c1ee0b3629d88e52323f"} Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.183997 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-84f77" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.184007 4771 scope.go:117] "RemoveContainer" containerID="4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.194251 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrxkk\" (UniqueName: \"kubernetes.io/projected/9578571d-8113-4e8e-b829-4c7283c4fbf1-kube-api-access-nrxkk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.206999 4771 scope.go:117] "RemoveContainer" containerID="4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5" Jan 23 13:44:33 crc kubenswrapper[4771]: E0123 13:44:33.207716 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5\": container with ID starting with 4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5 not found: ID does not exist" containerID="4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.207792 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5"} err="failed to get container status \"4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5\": rpc error: code = NotFound desc = could not find container \"4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5\": container with ID starting with 4769c0a591a8f37fe0dd01a0b656addcfe0c2565508e000f2a5873f5604f89a5 not found: ID does not exist" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.213791 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-84f77"] Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.219493 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-84f77"] Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.237382 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c1e299b-6a89-4d9c-87ff-e2937d66487d" path="/var/lib/kubelet/pods/6c1e299b-6a89-4d9c-87ff-e2937d66487d/volumes" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.296617 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:33 crc kubenswrapper[4771]: I0123 13:44:33.715650 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m"] Jan 23 13:44:33 crc kubenswrapper[4771]: W0123 13:44:33.725716 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9578571d_8113_4e8e_b829_4c7283c4fbf1.slice/crio-53775bf91c8a34c2375c63381ba730994ec19355700b022f7947bb44c389e99d WatchSource:0}: Error finding container 53775bf91c8a34c2375c63381ba730994ec19355700b022f7947bb44c389e99d: Status 404 returned error can't find the container with id 53775bf91c8a34c2375c63381ba730994ec19355700b022f7947bb44c389e99d Jan 23 13:44:34 crc kubenswrapper[4771]: I0123 13:44:34.192513 4771 generic.go:334] "Generic (PLEG): container finished" podID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerID="636739a86d53ebd9772fe66cacfab0eb9ad886f375dfcbbcd3b1b7471fb1c5d4" exitCode=0 Jan 23 13:44:34 crc kubenswrapper[4771]: I0123 13:44:34.192590 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" event={"ID":"9578571d-8113-4e8e-b829-4c7283c4fbf1","Type":"ContainerDied","Data":"636739a86d53ebd9772fe66cacfab0eb9ad886f375dfcbbcd3b1b7471fb1c5d4"} Jan 23 13:44:34 crc kubenswrapper[4771]: I0123 13:44:34.193826 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" event={"ID":"9578571d-8113-4e8e-b829-4c7283c4fbf1","Type":"ContainerStarted","Data":"53775bf91c8a34c2375c63381ba730994ec19355700b022f7947bb44c389e99d"} Jan 23 13:44:36 crc kubenswrapper[4771]: I0123 13:44:36.208790 4771 generic.go:334] "Generic (PLEG): container finished" podID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerID="cab2b2ca5f93a9f5ba65364fd2567d671ba4b0fa82bba2f725f996f04d32cd6a" exitCode=0 Jan 23 13:44:36 crc kubenswrapper[4771]: I0123 13:44:36.209141 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" event={"ID":"9578571d-8113-4e8e-b829-4c7283c4fbf1","Type":"ContainerDied","Data":"cab2b2ca5f93a9f5ba65364fd2567d671ba4b0fa82bba2f725f996f04d32cd6a"} Jan 23 13:44:37 crc kubenswrapper[4771]: I0123 13:44:37.220525 4771 generic.go:334] "Generic (PLEG): container finished" podID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerID="abd2934c4be89cda853b3e0618f75192c218c341020b9bc172f2d4d9bbf37f8b" exitCode=0 Jan 23 13:44:37 crc kubenswrapper[4771]: I0123 13:44:37.220598 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" event={"ID":"9578571d-8113-4e8e-b829-4c7283c4fbf1","Type":"ContainerDied","Data":"abd2934c4be89cda853b3e0618f75192c218c341020b9bc172f2d4d9bbf37f8b"} Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.495130 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.658332 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-bundle\") pod \"9578571d-8113-4e8e-b829-4c7283c4fbf1\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.658598 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-util\") pod \"9578571d-8113-4e8e-b829-4c7283c4fbf1\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.658655 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrxkk\" (UniqueName: \"kubernetes.io/projected/9578571d-8113-4e8e-b829-4c7283c4fbf1-kube-api-access-nrxkk\") pod \"9578571d-8113-4e8e-b829-4c7283c4fbf1\" (UID: \"9578571d-8113-4e8e-b829-4c7283c4fbf1\") " Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.660383 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-bundle" (OuterVolumeSpecName: "bundle") pod "9578571d-8113-4e8e-b829-4c7283c4fbf1" (UID: "9578571d-8113-4e8e-b829-4c7283c4fbf1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.665625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9578571d-8113-4e8e-b829-4c7283c4fbf1-kube-api-access-nrxkk" (OuterVolumeSpecName: "kube-api-access-nrxkk") pod "9578571d-8113-4e8e-b829-4c7283c4fbf1" (UID: "9578571d-8113-4e8e-b829-4c7283c4fbf1"). InnerVolumeSpecName "kube-api-access-nrxkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.689940 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-util" (OuterVolumeSpecName: "util") pod "9578571d-8113-4e8e-b829-4c7283c4fbf1" (UID: "9578571d-8113-4e8e-b829-4c7283c4fbf1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.760180 4771 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-util\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.760218 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrxkk\" (UniqueName: \"kubernetes.io/projected/9578571d-8113-4e8e-b829-4c7283c4fbf1-kube-api-access-nrxkk\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:38 crc kubenswrapper[4771]: I0123 13:44:38.760232 4771 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9578571d-8113-4e8e-b829-4c7283c4fbf1-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:44:39 crc kubenswrapper[4771]: I0123 13:44:39.240629 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" event={"ID":"9578571d-8113-4e8e-b829-4c7283c4fbf1","Type":"ContainerDied","Data":"53775bf91c8a34c2375c63381ba730994ec19355700b022f7947bb44c389e99d"} Jan 23 13:44:39 crc kubenswrapper[4771]: I0123 13:44:39.240674 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53775bf91c8a34c2375c63381ba730994ec19355700b022f7947bb44c389e99d" Jan 23 13:44:39 crc kubenswrapper[4771]: I0123 13:44:39.240696 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.491556 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"] Jan 23 13:44:48 crc kubenswrapper[4771]: E0123 13:44:48.492372 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerName="pull" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.492384 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerName="pull" Jan 23 13:44:48 crc kubenswrapper[4771]: E0123 13:44:48.492402 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerName="util" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.492422 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerName="util" Jan 23 13:44:48 crc kubenswrapper[4771]: E0123 13:44:48.492440 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerName="extract" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.492447 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerName="extract" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.492562 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="9578571d-8113-4e8e-b829-4c7283c4fbf1" containerName="extract" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.493160 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.495581 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.495822 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.496192 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.496865 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.496949 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-99mkk" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.509764 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"] Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.598701 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vn5n\" (UniqueName: \"kubernetes.io/projected/a8d73a95-8330-4f66-ab99-024cec0be447-kube-api-access-6vn5n\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.598760 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8d73a95-8330-4f66-ab99-024cec0be447-apiservice-cert\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.598859 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8d73a95-8330-4f66-ab99-024cec0be447-webhook-cert\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.700468 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vn5n\" (UniqueName: \"kubernetes.io/projected/a8d73a95-8330-4f66-ab99-024cec0be447-kube-api-access-6vn5n\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.700549 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8d73a95-8330-4f66-ab99-024cec0be447-apiservice-cert\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.700642 
4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8d73a95-8330-4f66-ab99-024cec0be447-webhook-cert\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.711276 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a8d73a95-8330-4f66-ab99-024cec0be447-webhook-cert\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.711754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a8d73a95-8330-4f66-ab99-024cec0be447-apiservice-cert\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.724077 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vn5n\" (UniqueName: \"kubernetes.io/projected/a8d73a95-8330-4f66-ab99-024cec0be447-kube-api-access-6vn5n\") pod \"metallb-operator-controller-manager-5f8bd5d6b5-dd58q\" (UID: \"a8d73a95-8330-4f66-ab99-024cec0be447\") " pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.814510 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.941585 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"]
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.943043 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.950137 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.950385 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rmkk5"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.950463 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 23 13:44:48 crc kubenswrapper[4771]: I0123 13:44:48.970774 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"]
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.105937 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6fc3f94-e22a-4d26-a74c-105d3b51173b-webhook-cert\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.105989 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84pv2\" (UniqueName: \"kubernetes.io/projected/a6fc3f94-e22a-4d26-a74c-105d3b51173b-kube-api-access-84pv2\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.106063 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6fc3f94-e22a-4d26-a74c-105d3b51173b-apiservice-cert\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.207048 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84pv2\" (UniqueName: \"kubernetes.io/projected/a6fc3f94-e22a-4d26-a74c-105d3b51173b-kube-api-access-84pv2\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.207168 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6fc3f94-e22a-4d26-a74c-105d3b51173b-apiservice-cert\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.207206 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6fc3f94-e22a-4d26-a74c-105d3b51173b-webhook-cert\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.213888 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6fc3f94-e22a-4d26-a74c-105d3b51173b-webhook-cert\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.225810 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6fc3f94-e22a-4d26-a74c-105d3b51173b-apiservice-cert\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.235221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84pv2\" (UniqueName: \"kubernetes.io/projected/a6fc3f94-e22a-4d26-a74c-105d3b51173b-kube-api-access-84pv2\") pod \"metallb-operator-webhook-server-5f89cf578d-zwzwx\" (UID: \"a6fc3f94-e22a-4d26-a74c-105d3b51173b\") " pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.284347 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.350837 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"]
Jan 23 13:44:49 crc kubenswrapper[4771]: W0123 13:44:49.361716 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d73a95_8330_4f66_ab99_024cec0be447.slice/crio-09f2332479cbeead4b90f215fa755fd5673cbe8a3875b0aa91b05d8ef86c21eb WatchSource:0}: Error finding container 09f2332479cbeead4b90f215fa755fd5673cbe8a3875b0aa91b05d8ef86c21eb: Status 404 returned error can't find the container with id 09f2332479cbeead4b90f215fa755fd5673cbe8a3875b0aa91b05d8ef86c21eb
Jan 23 13:44:49 crc kubenswrapper[4771]: I0123 13:44:49.540287 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"]
Jan 23 13:44:49 crc kubenswrapper[4771]: W0123 13:44:49.548826 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6fc3f94_e22a_4d26_a74c_105d3b51173b.slice/crio-4ae8d656b9c7efc4860166c0dfadda3d6601eaa07549afb8340d6f1e8356db72 WatchSource:0}: Error finding container 4ae8d656b9c7efc4860166c0dfadda3d6601eaa07549afb8340d6f1e8356db72: Status 404 returned error can't find the container with id 4ae8d656b9c7efc4860166c0dfadda3d6601eaa07549afb8340d6f1e8356db72
Jan 23 13:44:50 crc kubenswrapper[4771]: I0123 13:44:50.313040 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx" event={"ID":"a6fc3f94-e22a-4d26-a74c-105d3b51173b","Type":"ContainerStarted","Data":"4ae8d656b9c7efc4860166c0dfadda3d6601eaa07549afb8340d6f1e8356db72"}
Jan 23 13:44:50 crc kubenswrapper[4771]: I0123 13:44:50.314768 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" event={"ID":"a8d73a95-8330-4f66-ab99-024cec0be447","Type":"ContainerStarted","Data":"09f2332479cbeead4b90f215fa755fd5673cbe8a3875b0aa91b05d8ef86c21eb"}
Jan 23 13:44:55 crc kubenswrapper[4771]: I0123 13:44:55.350827 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx" event={"ID":"a6fc3f94-e22a-4d26-a74c-105d3b51173b","Type":"ContainerStarted","Data":"aecbb4e7b50f2c61c3fc8ff6bf3cc96d983a524838fa20af676d3569adac4f46"}
Jan 23 13:44:55 crc kubenswrapper[4771]: I0123 13:44:55.351570 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:44:55 crc kubenswrapper[4771]: I0123 13:44:55.353026 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" event={"ID":"a8d73a95-8330-4f66-ab99-024cec0be447","Type":"ContainerStarted","Data":"5cf19048f8d5d1db501e0d068a4918761a9cf39012d578ec7335c9a1fadb67e7"}
Jan 23 13:44:55 crc kubenswrapper[4771]: I0123 13:44:55.353203 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"
Jan 23 13:44:55 crc kubenswrapper[4771]: I0123 13:44:55.374553 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx" podStartSLOduration=2.08097129 podStartE2EDuration="7.374535563s" podCreationTimestamp="2026-01-23 13:44:48 +0000 UTC" firstStartedPulling="2026-01-23 13:44:49.552309992 +0000 UTC m=+730.574847617" lastFinishedPulling="2026-01-23 13:44:54.845874265 +0000 UTC m=+735.868411890" observedRunningTime="2026-01-23 13:44:55.369979176 +0000 UTC m=+736.392516801" watchObservedRunningTime="2026-01-23 13:44:55.374535563 +0000 UTC m=+736.397073188"
Jan 23 13:44:55 crc kubenswrapper[4771]: I0123 13:44:55.388921 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q" podStartSLOduration=1.934650205 podStartE2EDuration="7.388899628s" podCreationTimestamp="2026-01-23 13:44:48 +0000 UTC" firstStartedPulling="2026-01-23 13:44:49.374608177 +0000 UTC m=+730.397145802" lastFinishedPulling="2026-01-23 13:44:54.8288576 +0000 UTC m=+735.851395225" observedRunningTime="2026-01-23 13:44:55.385048581 +0000 UTC m=+736.407586226" watchObservedRunningTime="2026-01-23 13:44:55.388899628 +0000 UTC m=+736.411437263"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.141692 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"]
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.142919 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.144765 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.144873 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.153980 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"]
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.273294 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22cafb9-93b9-4c74-878a-3b61fc86aa40-config-volume\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.273429 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b22cafb9-93b9-4c74-878a-3b61fc86aa40-secret-volume\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.273483 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nncdp\" (UniqueName: \"kubernetes.io/projected/b22cafb9-93b9-4c74-878a-3b61fc86aa40-kube-api-access-nncdp\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.312539 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.312631 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.375352 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b22cafb9-93b9-4c74-878a-3b61fc86aa40-secret-volume\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.375443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nncdp\" (UniqueName: \"kubernetes.io/projected/b22cafb9-93b9-4c74-878a-3b61fc86aa40-kube-api-access-nncdp\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.375524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22cafb9-93b9-4c74-878a-3b61fc86aa40-config-volume\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.376677 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22cafb9-93b9-4c74-878a-3b61fc86aa40-config-volume\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.398963 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b22cafb9-93b9-4c74-878a-3b61fc86aa40-secret-volume\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.405215 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nncdp\" (UniqueName: \"kubernetes.io/projected/b22cafb9-93b9-4c74-878a-3b61fc86aa40-kube-api-access-nncdp\") pod \"collect-profiles-29486265-4qtnw\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.466485 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:00 crc kubenswrapper[4771]: I0123 13:45:00.941744 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"]
Jan 23 13:45:01 crc kubenswrapper[4771]: I0123 13:45:01.425048 4771 generic.go:334] "Generic (PLEG): container finished" podID="b22cafb9-93b9-4c74-878a-3b61fc86aa40" containerID="2cd19e2a64b39e22b1ae456f4310d91119158d955dd61e86d3e5582556b5e080" exitCode=0
Jan 23 13:45:01 crc kubenswrapper[4771]: I0123 13:45:01.425116 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw" event={"ID":"b22cafb9-93b9-4c74-878a-3b61fc86aa40","Type":"ContainerDied","Data":"2cd19e2a64b39e22b1ae456f4310d91119158d955dd61e86d3e5582556b5e080"}
Jan 23 13:45:01 crc kubenswrapper[4771]: I0123 13:45:01.425500 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw" event={"ID":"b22cafb9-93b9-4c74-878a-3b61fc86aa40","Type":"ContainerStarted","Data":"df526ca9377736bd10b26f0616154fc03b99fb2934af9c369e69720ca0014900"}
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.655611 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.819289 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b22cafb9-93b9-4c74-878a-3b61fc86aa40-secret-volume\") pod \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") "
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.819836 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22cafb9-93b9-4c74-878a-3b61fc86aa40-config-volume\") pod \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") "
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.819900 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nncdp\" (UniqueName: \"kubernetes.io/projected/b22cafb9-93b9-4c74-878a-3b61fc86aa40-kube-api-access-nncdp\") pod \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\" (UID: \"b22cafb9-93b9-4c74-878a-3b61fc86aa40\") "
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.821591 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b22cafb9-93b9-4c74-878a-3b61fc86aa40-config-volume" (OuterVolumeSpecName: "config-volume") pod "b22cafb9-93b9-4c74-878a-3b61fc86aa40" (UID: "b22cafb9-93b9-4c74-878a-3b61fc86aa40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.826244 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b22cafb9-93b9-4c74-878a-3b61fc86aa40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b22cafb9-93b9-4c74-878a-3b61fc86aa40" (UID: "b22cafb9-93b9-4c74-878a-3b61fc86aa40"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.843650 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22cafb9-93b9-4c74-878a-3b61fc86aa40-kube-api-access-nncdp" (OuterVolumeSpecName: "kube-api-access-nncdp") pod "b22cafb9-93b9-4c74-878a-3b61fc86aa40" (UID: "b22cafb9-93b9-4c74-878a-3b61fc86aa40"). InnerVolumeSpecName "kube-api-access-nncdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.922013 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nncdp\" (UniqueName: \"kubernetes.io/projected/b22cafb9-93b9-4c74-878a-3b61fc86aa40-kube-api-access-nncdp\") on node \"crc\" DevicePath \"\""
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.922058 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b22cafb9-93b9-4c74-878a-3b61fc86aa40-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 13:45:02 crc kubenswrapper[4771]: I0123 13:45:02.922070 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22cafb9-93b9-4c74-878a-3b61fc86aa40-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 13:45:03 crc kubenswrapper[4771]: I0123 13:45:03.438614 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw" event={"ID":"b22cafb9-93b9-4c74-878a-3b61fc86aa40","Type":"ContainerDied","Data":"df526ca9377736bd10b26f0616154fc03b99fb2934af9c369e69720ca0014900"}
Jan 23 13:45:03 crc kubenswrapper[4771]: I0123 13:45:03.438670 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df526ca9377736bd10b26f0616154fc03b99fb2934af9c369e69720ca0014900"
Jan 23 13:45:03 crc kubenswrapper[4771]: I0123 13:45:03.438735 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"
Jan 23 13:45:09 crc kubenswrapper[4771]: I0123 13:45:09.288913 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f89cf578d-zwzwx"
Jan 23 13:45:19 crc kubenswrapper[4771]: I0123 13:45:19.938157 4771 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 13:45:28 crc kubenswrapper[4771]: I0123 13:45:28.818242 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5f8bd5d6b5-dd58q"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.633880 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kglff"]
Jan 23 13:45:29 crc kubenswrapper[4771]: E0123 13:45:29.634251 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22cafb9-93b9-4c74-878a-3b61fc86aa40" containerName="collect-profiles"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.634276 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22cafb9-93b9-4c74-878a-3b61fc86aa40" containerName="collect-profiles"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.634454 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b22cafb9-93b9-4c74-878a-3b61fc86aa40" containerName="collect-profiles"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.637194 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.639927 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2l8q9"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.640210 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.640235 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.654829 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"]
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.656855 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.659799 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.668534 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"]
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711340 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-startup\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711453 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbxtr\" (UniqueName: \"kubernetes.io/projected/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-kube-api-access-wbxtr\") pod \"frr-k8s-webhook-server-7df86c4f6c-wgw65\" (UID: \"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711494 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-reloader\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-conf\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711615 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wgw65\" (UID: \"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711645 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-sockets\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711698 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb57\" (UniqueName: \"kubernetes.io/projected/741dbcde-6dfb-4b44-89fb-f020af39320d-kube-api-access-scb57\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711754 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-metrics\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.711787 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/741dbcde-6dfb-4b44-89fb-f020af39320d-metrics-certs\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.763816 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jx8m8"]
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.764787 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-ldmbx"]
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.765599 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.766116 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.771705 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.772150 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.772355 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.772900 4771 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-px9lt"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.774314 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.782345 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-ldmbx"]
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813059 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-metrics\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813338 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/741dbcde-6dfb-4b44-89fb-f020af39320d-metrics-certs\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813450 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzqk\" (UniqueName: \"kubernetes.io/projected/1b312eb2-b256-4161-8f3c-dce680d9dbfc-kube-api-access-hgzqk\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813546 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-startup\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813687 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbxtr\" (UniqueName: \"kubernetes.io/projected/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-kube-api-access-wbxtr\") pod \"frr-k8s-webhook-server-7df86c4f6c-wgw65\" (UID: \"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-reloader\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813850 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-cert\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.813924 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c726\" (UniqueName: \"kubernetes.io/projected/df96f6c9-141e-453a-a114-573d7604e8a7-kube-api-access-2c726\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814009 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814148 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-metrics-certs\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814231 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/df96f6c9-141e-453a-a114-573d7604e8a7-metallb-excludel2\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-metrics-certs\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-conf\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814487 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wgw65\" (UID: \"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814563 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-sockets\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb57\" (UniqueName: \"kubernetes.io/projected/741dbcde-6dfb-4b44-89fb-f020af39320d-kube-api-access-scb57\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.815110 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-reloader\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: E0123 13:45:29.815509 4771 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Jan 23 13:45:29 crc kubenswrapper[4771]: E0123 13:45:29.815553 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-cert podName:ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2 nodeName:}" failed. No retries permitted until 2026-01-23 13:45:30.315540068 +0000 UTC m=+771.338077693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-cert") pod "frr-k8s-webhook-server-7df86c4f6c-wgw65" (UID: "ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2") : secret "frr-k8s-webhook-server-cert" not found
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.814405 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-metrics\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.815685 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-conf\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.815813 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-startup\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.816297 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/741dbcde-6dfb-4b44-89fb-f020af39320d-frr-sockets\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.822058 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/741dbcde-6dfb-4b44-89fb-f020af39320d-metrics-certs\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.835264 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scb57\" (UniqueName: \"kubernetes.io/projected/741dbcde-6dfb-4b44-89fb-f020af39320d-kube-api-access-scb57\") pod \"frr-k8s-kglff\" (UID: \"741dbcde-6dfb-4b44-89fb-f020af39320d\") " pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.835371 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbxtr\" (UniqueName: \"kubernetes.io/projected/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-kube-api-access-wbxtr\") pod \"frr-k8s-webhook-server-7df86c4f6c-wgw65\" (UID: \"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.921339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-cert\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.921425 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c726\" (UniqueName: \"kubernetes.io/projected/df96f6c9-141e-453a-a114-573d7604e8a7-kube-api-access-2c726\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.921462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.921487 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-metrics-certs\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.921513 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/df96f6c9-141e-453a-a114-573d7604e8a7-metallb-excludel2\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.921553 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-metrics-certs\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.921636 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzqk\" (UniqueName: \"kubernetes.io/projected/1b312eb2-b256-4161-8f3c-dce680d9dbfc-kube-api-access-hgzqk\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: E0123 13:45:29.921972 4771 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 23 13:45:29 crc kubenswrapper[4771]: E0123 13:45:29.922062 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist podName:df96f6c9-141e-453a-a114-573d7604e8a7 nodeName:}" failed. No retries permitted until 2026-01-23 13:45:30.422041029 +0000 UTC m=+771.444578854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist") pod "speaker-jx8m8" (UID: "df96f6c9-141e-453a-a114-573d7604e8a7") : secret "metallb-memberlist" not found
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.922744 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/df96f6c9-141e-453a-a114-573d7604e8a7-metallb-excludel2\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: E0123 13:45:29.922836 4771 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 23 13:45:29 crc kubenswrapper[4771]: E0123 13:45:29.922893 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-metrics-certs podName:1b312eb2-b256-4161-8f3c-dce680d9dbfc nodeName:}" failed. No retries permitted until 2026-01-23 13:45:30.422877564 +0000 UTC m=+771.445415399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-metrics-certs") pod "controller-6968d8fdc4-ldmbx" (UID: "1b312eb2-b256-4161-8f3c-dce680d9dbfc") : secret "controller-certs-secret" not found
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.929109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-metrics-certs\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.941714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-cert\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.947647 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c726\" (UniqueName: \"kubernetes.io/projected/df96f6c9-141e-453a-a114-573d7604e8a7-kube-api-access-2c726\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.953094 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzqk\" (UniqueName: \"kubernetes.io/projected/1b312eb2-b256-4161-8f3c-dce680d9dbfc-kube-api-access-hgzqk\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:29 crc kubenswrapper[4771]: I0123 13:45:29.959827 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.311967 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.312312 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.312460 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.313005 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7197a1f46f7fe0a055c3cd1d599823ec1a0bf6cce8f38ed1e420f676015408ef"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.313149 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://7197a1f46f7fe0a055c3cd1d599823ec1a0bf6cce8f38ed1e420f676015408ef" gracePeriod=600
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.330957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wgw65\" (UID: \"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.334817 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-wgw65\" (UID: \"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.433018 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.433066 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-metrics-certs\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:30 crc kubenswrapper[4771]: E0123 13:45:30.433293 4771 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 23 13:45:30 crc kubenswrapper[4771]: E0123 13:45:30.433384 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist podName:df96f6c9-141e-453a-a114-573d7604e8a7 nodeName:}" failed. No retries permitted until 2026-01-23 13:45:31.433359463 +0000 UTC m=+772.455897268 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist") pod "speaker-jx8m8" (UID: "df96f6c9-141e-453a-a114-573d7604e8a7") : secret "metallb-memberlist" not found
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.436034 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b312eb2-b256-4161-8f3c-dce680d9dbfc-metrics-certs\") pod \"controller-6968d8fdc4-ldmbx\" (UID: \"1b312eb2-b256-4161-8f3c-dce680d9dbfc\") " pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.602511 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="7197a1f46f7fe0a055c3cd1d599823ec1a0bf6cce8f38ed1e420f676015408ef" exitCode=0
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.602596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"7197a1f46f7fe0a055c3cd1d599823ec1a0bf6cce8f38ed1e420f676015408ef"}
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.602989 4771 scope.go:117] "RemoveContainer" containerID="6b35dff9a089e9be4ffa5c4d273ac0e2dc94e3b914bdc6c34a5a9f3294cdefc7"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.604108 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerStarted","Data":"015d8b75ec788e881ee3cb208908fb207d4c8773e636e2aaf95bb1a206d45f6f"}
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.613481 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.685951 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:30 crc kubenswrapper[4771]: I0123 13:45:30.961185 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"]
Jan 23 13:45:30 crc kubenswrapper[4771]: W0123 13:45:30.964223 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef45af5a_2541_4e3b_92b3_6b09ca1ffcf2.slice/crio-f201aa0a161dc50493318ab62099709d8c6d92be492d4083300dc2f2d8ed8332 WatchSource:0}: Error finding container f201aa0a161dc50493318ab62099709d8c6d92be492d4083300dc2f2d8ed8332: Status 404 returned error can't find the container with id f201aa0a161dc50493318ab62099709d8c6d92be492d4083300dc2f2d8ed8332
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.029987 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-ldmbx"]
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.448018 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.458207 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/df96f6c9-141e-453a-a114-573d7604e8a7-memberlist\") pod \"speaker-jx8m8\" (UID: \"df96f6c9-141e-453a-a114-573d7604e8a7\") " pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.596262 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.615617 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"dee83f309be07e5f0f1af35989d1377c17f49b4ede91bda4763351e5bf93274d"}
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.624010 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-ldmbx" event={"ID":"1b312eb2-b256-4161-8f3c-dce680d9dbfc","Type":"ContainerStarted","Data":"f19662b63ce3a9e2fc70c75d16add3ed295db27295d3e8fdf6b8a193f383bcdf"}
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.624056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-ldmbx" event={"ID":"1b312eb2-b256-4161-8f3c-dce680d9dbfc","Type":"ContainerStarted","Data":"974905c82b79900eec66073ea8d8003cd6cf1e61a2a37204b777332fbab05683"}
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.624067 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-ldmbx" event={"ID":"1b312eb2-b256-4161-8f3c-dce680d9dbfc","Type":"ContainerStarted","Data":"2594ee52bd019da459edce4f0114c4957974fd6645d6f6e8c147261c5b786579"}
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.624946 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-ldmbx"
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.633206 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65" event={"ID":"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2","Type":"ContainerStarted","Data":"f201aa0a161dc50493318ab62099709d8c6d92be492d4083300dc2f2d8ed8332"}
Jan 23 13:45:31 crc kubenswrapper[4771]: W0123 13:45:31.633234 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf96f6c9_141e_453a_a114_573d7604e8a7.slice/crio-bf93ee09799b8e08626c98ee4f9ff96741fd1d8b89c1c6185ba34c9a60f7765e WatchSource:0}: Error finding container bf93ee09799b8e08626c98ee4f9ff96741fd1d8b89c1c6185ba34c9a60f7765e: Status 404 returned error can't find the container with id bf93ee09799b8e08626c98ee4f9ff96741fd1d8b89c1c6185ba34c9a60f7765e
Jan 23 13:45:31 crc kubenswrapper[4771]: I0123 13:45:31.677740 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-ldmbx" podStartSLOduration=2.677716916 podStartE2EDuration="2.677716916s" podCreationTimestamp="2026-01-23 13:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:45:31.672445607 +0000 UTC m=+772.694983252" watchObservedRunningTime="2026-01-23 13:45:31.677716916 +0000 UTC m=+772.700254531"
Jan 23 13:45:32 crc kubenswrapper[4771]: I0123 13:45:32.654927 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jx8m8" event={"ID":"df96f6c9-141e-453a-a114-573d7604e8a7","Type":"ContainerStarted","Data":"ed981028d1d1577e1d19068aca9cd0c0b43b32b46dfedd2eee44cfb8b70d759a"}
Jan 23 13:45:32 crc kubenswrapper[4771]: I0123 13:45:32.655668 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jx8m8" event={"ID":"df96f6c9-141e-453a-a114-573d7604e8a7","Type":"ContainerStarted","Data":"8efe587b1374850fbeba8875b606d010ee5880612c8816b4e742e9e9eee92e1d"}
Jan 23 13:45:32 crc kubenswrapper[4771]: I0123 13:45:32.655689 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jx8m8" event={"ID":"df96f6c9-141e-453a-a114-573d7604e8a7","Type":"ContainerStarted","Data":"bf93ee09799b8e08626c98ee4f9ff96741fd1d8b89c1c6185ba34c9a60f7765e"}
Jan 23 13:45:32 crc kubenswrapper[4771]: I0123 13:45:32.655885 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:32 crc kubenswrapper[4771]: I0123 13:45:32.696168 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jx8m8" podStartSLOduration=3.696145116 podStartE2EDuration="3.696145116s" podCreationTimestamp="2026-01-23 13:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:45:32.694799725 +0000 UTC m=+773.717337350" watchObservedRunningTime="2026-01-23 13:45:32.696145116 +0000 UTC m=+773.718682751"
Jan 23 13:45:39 crc kubenswrapper[4771]: I0123 13:45:39.758841 4771 generic.go:334] "Generic (PLEG): container finished" podID="741dbcde-6dfb-4b44-89fb-f020af39320d" containerID="e036fc3f99c9f214158c649108470951e0b1514647bc9d72a08ac96eb77a20fb" exitCode=0
Jan 23 13:45:39 crc kubenswrapper[4771]: I0123 13:45:39.758939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerDied","Data":"e036fc3f99c9f214158c649108470951e0b1514647bc9d72a08ac96eb77a20fb"}
Jan 23 13:45:39 crc kubenswrapper[4771]: I0123 13:45:39.764880 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65" event={"ID":"ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2","Type":"ContainerStarted","Data":"0b9e2db2490ec618ae49dedcd03821f5db8e79c161e39bb14a1ca51199aa8b05"}
Jan 23 13:45:39 crc kubenswrapper[4771]: I0123 13:45:39.765499 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65"
Jan 23 13:45:39 crc kubenswrapper[4771]: I0123 13:45:39.809716 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65" podStartSLOduration=2.705432224 podStartE2EDuration="10.809691661s" podCreationTimestamp="2026-01-23 13:45:29 +0000 UTC" firstStartedPulling="2026-01-23 13:45:30.966293491 +0000 UTC m=+771.988831116" lastFinishedPulling="2026-01-23 13:45:39.070552928 +0000 UTC m=+780.093090553" observedRunningTime="2026-01-23 13:45:39.80601614 +0000 UTC m=+780.828553785" watchObservedRunningTime="2026-01-23 13:45:39.809691661 +0000 UTC m=+780.832229286"
Jan 23 13:45:40 crc kubenswrapper[4771]: I0123 13:45:40.773643 4771 generic.go:334] "Generic (PLEG): container finished" podID="741dbcde-6dfb-4b44-89fb-f020af39320d" containerID="d851d9eb1a64554908990a4c58d91d795c6fdf0fbf899884e7dfaba445969aab" exitCode=0
Jan 23 13:45:40 crc kubenswrapper[4771]: I0123 13:45:40.773718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerDied","Data":"d851d9eb1a64554908990a4c58d91d795c6fdf0fbf899884e7dfaba445969aab"}
Jan 23 13:45:41 crc kubenswrapper[4771]: I0123 13:45:41.601870 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jx8m8"
Jan 23 13:45:41 crc kubenswrapper[4771]: I0123 13:45:41.785056 4771 generic.go:334] "Generic (PLEG): container finished" podID="741dbcde-6dfb-4b44-89fb-f020af39320d" containerID="8beec22e920f392ad6c2ef9babf980815d3770ef57f05526870df50fb35ac335" exitCode=0
Jan 23 13:45:41 crc kubenswrapper[4771]: I0123 13:45:41.785099 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerDied","Data":"8beec22e920f392ad6c2ef9babf980815d3770ef57f05526870df50fb35ac335"}
Jan 23 13:45:42 crc kubenswrapper[4771]: I0123 13:45:42.801189 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerStarted","Data":"7db478d6e487c3793bc8104377e8cdad4e4fed9e5681ad8fd2a03eb068fca302"}
Jan 23 13:45:42 crc kubenswrapper[4771]: I0123 13:45:42.801791 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerStarted","Data":"ec1bfa256f665d36e175d17e7cc3915dfc70f6179b64d29643b56c800808c10a"}
Jan 23 13:45:42 crc kubenswrapper[4771]: I0123 13:45:42.801807 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerStarted","Data":"c9f51012811a4df30ad3cf569b88ce62f4a4e6f4949ef6edea3f35833b55d503"}
Jan 23 13:45:42 crc kubenswrapper[4771]: I0123 13:45:42.801820 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerStarted","Data":"d7db6fe260af456ea0cf9f456869587efec96bb35a04f687e243f1e703754a6d"}
Jan 23 13:45:42 crc kubenswrapper[4771]: I0123 13:45:42.801831 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerStarted","Data":"31ef665a5098439188a87c9eec4f069fd38e4312646bcdee0e3e5b6441dff27d"}
Jan 23 13:45:43 crc kubenswrapper[4771]: I0123 13:45:43.811547 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kglff" event={"ID":"741dbcde-6dfb-4b44-89fb-f020af39320d","Type":"ContainerStarted","Data":"ba99de7926f5f268355e8216aa736ba8cc9730e6c901fe30b83f3bcec8d37805"}
Jan 23 13:45:43 crc kubenswrapper[4771]: I0123 13:45:43.812658 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kglff"
Jan 23 13:45:43 crc kubenswrapper[4771]: I0123 13:45:43.836277 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kglff" podStartSLOduration=5.834877457 podStartE2EDuration="14.836256026s" podCreationTimestamp="2026-01-23 13:45:29 +0000 UTC" firstStartedPulling="2026-01-23 13:45:30.09732647 +0000 UTC m=+771.119864095" lastFinishedPulling="2026-01-23 13:45:39.098705039 +0000 UTC m=+780.121242664" observedRunningTime="2026-01-23 13:45:43.831623582 +0000 UTC m=+784.854161207" watchObservedRunningTime="2026-01-23 13:45:43.836256026 +0000 UTC m=+784.858793641"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.603641 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vzjrt"]
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.604705 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vzjrt"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.607257 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.607563 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vwlb4"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.607738 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.616791 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vzjrt"]
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.675562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49vs4\" (UniqueName: \"kubernetes.io/projected/6fb46933-0804-4ac1-b6f1-d326538500a2-kube-api-access-49vs4\") pod \"openstack-operator-index-vzjrt\" (UID: \"6fb46933-0804-4ac1-b6f1-d326538500a2\") " pod="openstack-operators/openstack-operator-index-vzjrt"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.776444 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49vs4\" (UniqueName: \"kubernetes.io/projected/6fb46933-0804-4ac1-b6f1-d326538500a2-kube-api-access-49vs4\") pod \"openstack-operator-index-vzjrt\" (UID: \"6fb46933-0804-4ac1-b6f1-d326538500a2\") " pod="openstack-operators/openstack-operator-index-vzjrt"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.809399 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49vs4\" (UniqueName: \"kubernetes.io/projected/6fb46933-0804-4ac1-b6f1-d326538500a2-kube-api-access-49vs4\") pod \"openstack-operator-index-vzjrt\" (UID: \"6fb46933-0804-4ac1-b6f1-d326538500a2\") " pod="openstack-operators/openstack-operator-index-vzjrt"
Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.923265 4771 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-vzjrt" Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.960056 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kglff" Jan 23 13:45:44 crc kubenswrapper[4771]: I0123 13:45:44.998860 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kglff" Jan 23 13:45:45 crc kubenswrapper[4771]: I0123 13:45:45.367633 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vzjrt"] Jan 23 13:45:45 crc kubenswrapper[4771]: I0123 13:45:45.830975 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vzjrt" event={"ID":"6fb46933-0804-4ac1-b6f1-d326538500a2","Type":"ContainerStarted","Data":"24d651d9ea05b6666590640721f7ab31c9728fc323668d95196913b0c6cf8098"} Jan 23 13:45:47 crc kubenswrapper[4771]: I0123 13:45:47.786286 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vzjrt"] Jan 23 13:45:47 crc kubenswrapper[4771]: I0123 13:45:47.849094 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vzjrt" event={"ID":"6fb46933-0804-4ac1-b6f1-d326538500a2","Type":"ContainerStarted","Data":"99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25"} Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.387005 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vzjrt" podStartSLOduration=2.309212023 podStartE2EDuration="4.386981455s" podCreationTimestamp="2026-01-23 13:45:44 +0000 UTC" firstStartedPulling="2026-01-23 13:45:45.383072351 +0000 UTC m=+786.405609976" lastFinishedPulling="2026-01-23 13:45:47.460841783 +0000 UTC m=+788.483379408" observedRunningTime="2026-01-23 13:45:47.867694777 +0000 UTC m=+788.890232422" watchObservedRunningTime="2026-01-23 13:45:48.386981455 +0000 UTC m=+789.409519080" Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.392143 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-q7jx4"] Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.392988 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.405689 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q7jx4"] Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.527921 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4vf\" (UniqueName: \"kubernetes.io/projected/c714cbf2-6b46-41ca-8469-1d4ed6545e80-kube-api-access-jq4vf\") pod \"openstack-operator-index-q7jx4\" (UID: \"c714cbf2-6b46-41ca-8469-1d4ed6545e80\") " pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.629803 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq4vf\" (UniqueName: \"kubernetes.io/projected/c714cbf2-6b46-41ca-8469-1d4ed6545e80-kube-api-access-jq4vf\") pod \"openstack-operator-index-q7jx4\" (UID: \"c714cbf2-6b46-41ca-8469-1d4ed6545e80\") " pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.657085 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq4vf\" (UniqueName: \"kubernetes.io/projected/c714cbf2-6b46-41ca-8469-1d4ed6545e80-kube-api-access-jq4vf\") pod \"openstack-operator-index-q7jx4\" (UID: \"c714cbf2-6b46-41ca-8469-1d4ed6545e80\") " pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.715252 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:48 crc kubenswrapper[4771]: I0123 13:45:48.862777 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vzjrt" podUID="6fb46933-0804-4ac1-b6f1-d326538500a2" containerName="registry-server" containerID="cri-o://99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25" gracePeriod=2 Jan 23 13:45:48 crc kubenswrapper[4771]: E0123 13:45:48.918273 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741dbcde_6dfb_4b44_89fb_f020af39320d.slice/crio-e036fc3f99c9f214158c649108470951e0b1514647bc9d72a08ac96eb77a20fb.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:45:49 crc kubenswrapper[4771]: W0123 13:45:49.157948 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc714cbf2_6b46_41ca_8469_1d4ed6545e80.slice/crio-ce63a733da2e7825ac5fca18113a8a5a3fa90d04d1938e6ab33305eeff4da587 WatchSource:0}: Error finding container ce63a733da2e7825ac5fca18113a8a5a3fa90d04d1938e6ab33305eeff4da587: Status 404 returned error can't find the container with id ce63a733da2e7825ac5fca18113a8a5a3fa90d04d1938e6ab33305eeff4da587 Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.163457 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-q7jx4"] Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.847725 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vzjrt" Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.877256 4771 generic.go:334] "Generic (PLEG): container finished" podID="6fb46933-0804-4ac1-b6f1-d326538500a2" containerID="99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25" exitCode=0 Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.877368 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vzjrt" event={"ID":"6fb46933-0804-4ac1-b6f1-d326538500a2","Type":"ContainerDied","Data":"99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25"} Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.877439 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vzjrt" event={"ID":"6fb46933-0804-4ac1-b6f1-d326538500a2","Type":"ContainerDied","Data":"24d651d9ea05b6666590640721f7ab31c9728fc323668d95196913b0c6cf8098"} Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.877477 4771 scope.go:117] "RemoveContainer" containerID="99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25" Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.877699 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vzjrt" Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.880285 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q7jx4" event={"ID":"c714cbf2-6b46-41ca-8469-1d4ed6545e80","Type":"ContainerStarted","Data":"22ae03b88dd5fd96b72ce9a47f84bafe937630a9b539333765d8b228be3e27f5"} Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.880592 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-q7jx4" event={"ID":"c714cbf2-6b46-41ca-8469-1d4ed6545e80","Type":"ContainerStarted","Data":"ce63a733da2e7825ac5fca18113a8a5a3fa90d04d1938e6ab33305eeff4da587"} Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.908192 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-q7jx4" podStartSLOduration=1.845924219 podStartE2EDuration="1.908161951s" podCreationTimestamp="2026-01-23 13:45:48 +0000 UTC" firstStartedPulling="2026-01-23 13:45:49.162850679 +0000 UTC m=+790.185388294" lastFinishedPulling="2026-01-23 13:45:49.225088401 +0000 UTC m=+790.247626026" observedRunningTime="2026-01-23 13:45:49.898123203 +0000 UTC m=+790.920660838" watchObservedRunningTime="2026-01-23 13:45:49.908161951 +0000 UTC m=+790.930699596" Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.917569 4771 scope.go:117] "RemoveContainer" containerID="99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25" Jan 23 13:45:49 crc kubenswrapper[4771]: E0123 13:45:49.918148 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25\": container with ID starting with 99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25 not found: ID does not exist" containerID="99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25" Jan 23 13:45:49 crc kubenswrapper[4771]: I0123 13:45:49.918185 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25"} err="failed to get 
container status \"99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25\": rpc error: code = NotFound desc = could not find container \"99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25\": container with ID starting with 99c482d9f96167089805663664b5777068ac6bf5e3bb9562316db5da7970ca25 not found: ID does not exist" Jan 23 13:45:50 crc kubenswrapper[4771]: I0123 13:45:50.050331 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49vs4\" (UniqueName: \"kubernetes.io/projected/6fb46933-0804-4ac1-b6f1-d326538500a2-kube-api-access-49vs4\") pod \"6fb46933-0804-4ac1-b6f1-d326538500a2\" (UID: \"6fb46933-0804-4ac1-b6f1-d326538500a2\") " Jan 23 13:45:50 crc kubenswrapper[4771]: I0123 13:45:50.058751 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb46933-0804-4ac1-b6f1-d326538500a2-kube-api-access-49vs4" (OuterVolumeSpecName: "kube-api-access-49vs4") pod "6fb46933-0804-4ac1-b6f1-d326538500a2" (UID: "6fb46933-0804-4ac1-b6f1-d326538500a2"). InnerVolumeSpecName "kube-api-access-49vs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:45:50 crc kubenswrapper[4771]: I0123 13:45:50.151752 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49vs4\" (UniqueName: \"kubernetes.io/projected/6fb46933-0804-4ac1-b6f1-d326538500a2-kube-api-access-49vs4\") on node \"crc\" DevicePath \"\"" Jan 23 13:45:50 crc kubenswrapper[4771]: I0123 13:45:50.215614 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vzjrt"] Jan 23 13:45:50 crc kubenswrapper[4771]: I0123 13:45:50.219809 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vzjrt"] Jan 23 13:45:50 crc kubenswrapper[4771]: I0123 13:45:50.621003 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-wgw65" Jan 23 13:45:50 crc kubenswrapper[4771]: I0123 13:45:50.694058 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-ldmbx" Jan 23 13:45:51 crc kubenswrapper[4771]: I0123 13:45:51.236554 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb46933-0804-4ac1-b6f1-d326538500a2" path="/var/lib/kubelet/pods/6fb46933-0804-4ac1-b6f1-d326538500a2/volumes" Jan 23 13:45:58 crc kubenswrapper[4771]: I0123 13:45:58.716133 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:58 crc kubenswrapper[4771]: I0123 13:45:58.716818 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:58 crc kubenswrapper[4771]: I0123 13:45:58.744080 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:58 crc kubenswrapper[4771]: I0123 13:45:58.968553 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-q7jx4" Jan 23 13:45:59 crc kubenswrapper[4771]: E0123 13:45:59.088838 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741dbcde_6dfb_4b44_89fb_f020af39320d.slice/crio-e036fc3f99c9f214158c649108470951e0b1514647bc9d72a08ac96eb77a20fb.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:45:59 crc kubenswrapper[4771]: I0123 13:45:59.962987 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kglff" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.799910 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6"] Jan 23 13:46:05 crc kubenswrapper[4771]: E0123 13:46:05.800278 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb46933-0804-4ac1-b6f1-d326538500a2" containerName="registry-server" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.800296 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb46933-0804-4ac1-b6f1-d326538500a2" containerName="registry-server" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.800505 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb46933-0804-4ac1-b6f1-d326538500a2" containerName="registry-server" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.801580 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.803445 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-k4m8k" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.814702 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6"] Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.879065 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-util\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.879151 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcz7l\" (UniqueName: \"kubernetes.io/projected/d6ed33f7-5653-4575-9457-22ec51d0e961-kube-api-access-hcz7l\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.879175 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-bundle\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.980221 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-bundle\") pod 
\"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.980314 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-util\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.980382 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcz7l\" (UniqueName: \"kubernetes.io/projected/d6ed33f7-5653-4575-9457-22ec51d0e961-kube-api-access-hcz7l\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.981159 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-bundle\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:05 crc kubenswrapper[4771]: I0123 13:46:05.981275 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-util\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:06 crc kubenswrapper[4771]: I0123 13:46:06.002796 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcz7l\" (UniqueName: \"kubernetes.io/projected/d6ed33f7-5653-4575-9457-22ec51d0e961-kube-api-access-hcz7l\") pod \"474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:06 crc kubenswrapper[4771]: I0123 13:46:06.121186 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:06 crc kubenswrapper[4771]: I0123 13:46:06.643183 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6"] Jan 23 13:46:06 crc kubenswrapper[4771]: I0123 13:46:06.990772 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" event={"ID":"d6ed33f7-5653-4575-9457-22ec51d0e961","Type":"ContainerStarted","Data":"2d422eb37530fcdceb88078bb5dbdcaff07a8143dc6e69a3b8e299e0725a86d1"} Jan 23 13:46:09 crc kubenswrapper[4771]: I0123 13:46:09.008714 4771 generic.go:334] "Generic (PLEG): container finished" podID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerID="29bc8aabfeddaaa7051f26c4020b05ece4380a8a27c57c236d1bacdf9c1a1a1e" exitCode=0 Jan 23 13:46:09 crc kubenswrapper[4771]: I0123 13:46:09.008905 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" event={"ID":"d6ed33f7-5653-4575-9457-22ec51d0e961","Type":"ContainerDied","Data":"29bc8aabfeddaaa7051f26c4020b05ece4380a8a27c57c236d1bacdf9c1a1a1e"} Jan 23 13:46:09 crc kubenswrapper[4771]: E0123 13:46:09.242453 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741dbcde_6dfb_4b44_89fb_f020af39320d.slice/crio-e036fc3f99c9f214158c649108470951e0b1514647bc9d72a08ac96eb77a20fb.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:46:10 crc kubenswrapper[4771]: I0123 13:46:10.017010 4771 generic.go:334] "Generic (PLEG): container finished" podID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerID="294eeec52001a8388b5717694fd578f9263ed5a64c4fbfecd3012608893a7a24" exitCode=0 Jan 23 13:46:10 crc kubenswrapper[4771]: I0123 13:46:10.017121 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" event={"ID":"d6ed33f7-5653-4575-9457-22ec51d0e961","Type":"ContainerDied","Data":"294eeec52001a8388b5717694fd578f9263ed5a64c4fbfecd3012608893a7a24"} Jan 23 13:46:11 crc kubenswrapper[4771]: I0123 13:46:11.029253 4771 generic.go:334] "Generic (PLEG): container finished" podID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerID="7081b3796df20d1ceeadd172ba2b7fa3ea86be134106a58990d5b6a7542e134c" exitCode=0 Jan 23 13:46:11 crc kubenswrapper[4771]: I0123 13:46:11.029311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" event={"ID":"d6ed33f7-5653-4575-9457-22ec51d0e961","Type":"ContainerDied","Data":"7081b3796df20d1ceeadd172ba2b7fa3ea86be134106a58990d5b6a7542e134c"} Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.286583 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.475206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcz7l\" (UniqueName: \"kubernetes.io/projected/d6ed33f7-5653-4575-9457-22ec51d0e961-kube-api-access-hcz7l\") pod \"d6ed33f7-5653-4575-9457-22ec51d0e961\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.475358 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-bundle\") pod \"d6ed33f7-5653-4575-9457-22ec51d0e961\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.475503 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-util\") pod \"d6ed33f7-5653-4575-9457-22ec51d0e961\" (UID: \"d6ed33f7-5653-4575-9457-22ec51d0e961\") " Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.476298 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-bundle" (OuterVolumeSpecName: "bundle") pod "d6ed33f7-5653-4575-9457-22ec51d0e961" (UID: "d6ed33f7-5653-4575-9457-22ec51d0e961"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.482651 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ed33f7-5653-4575-9457-22ec51d0e961-kube-api-access-hcz7l" (OuterVolumeSpecName: "kube-api-access-hcz7l") pod "d6ed33f7-5653-4575-9457-22ec51d0e961" (UID: "d6ed33f7-5653-4575-9457-22ec51d0e961"). InnerVolumeSpecName "kube-api-access-hcz7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.491113 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-util" (OuterVolumeSpecName: "util") pod "d6ed33f7-5653-4575-9457-22ec51d0e961" (UID: "d6ed33f7-5653-4575-9457-22ec51d0e961"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.577282 4771 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.577343 4771 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6ed33f7-5653-4575-9457-22ec51d0e961-util\") on node \"crc\" DevicePath \"\"" Jan 23 13:46:12 crc kubenswrapper[4771]: I0123 13:46:12.577359 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcz7l\" (UniqueName: \"kubernetes.io/projected/d6ed33f7-5653-4575-9457-22ec51d0e961-kube-api-access-hcz7l\") on node \"crc\" DevicePath \"\"" Jan 23 13:46:13 crc kubenswrapper[4771]: I0123 13:46:13.045711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" event={"ID":"d6ed33f7-5653-4575-9457-22ec51d0e961","Type":"ContainerDied","Data":"2d422eb37530fcdceb88078bb5dbdcaff07a8143dc6e69a3b8e299e0725a86d1"} Jan 23 13:46:13 crc kubenswrapper[4771]: I0123 13:46:13.045773 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d422eb37530fcdceb88078bb5dbdcaff07a8143dc6e69a3b8e299e0725a86d1" Jan 23 13:46:13 crc kubenswrapper[4771]: I0123 13:46:13.045886 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.970296 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l"] Jan 23 13:46:17 crc kubenswrapper[4771]: E0123 13:46:17.971008 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerName="util" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.971020 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerName="util" Jan 23 13:46:17 crc kubenswrapper[4771]: E0123 13:46:17.971035 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerName="pull" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.971041 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerName="pull" Jan 23 13:46:17 crc kubenswrapper[4771]: E0123 13:46:17.971052 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerName="extract" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.971058 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerName="extract" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.971178 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ed33f7-5653-4575-9457-22ec51d0e961" containerName="extract" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.971643 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.973543 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-djzmb" Jan 23 13:46:17 crc kubenswrapper[4771]: I0123 13:46:17.993635 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l"] Jan 23 13:46:18 crc kubenswrapper[4771]: I0123 13:46:18.149072 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdlkn\" (UniqueName: \"kubernetes.io/projected/b1cee457-8610-42c9-be6e-3cf0f8628aba-kube-api-access-jdlkn\") pod \"openstack-operator-controller-init-5c9f89db4c-99h7l\" (UID: \"b1cee457-8610-42c9-be6e-3cf0f8628aba\") " pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" Jan 23 13:46:18 crc kubenswrapper[4771]: I0123 13:46:18.250872 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdlkn\" (UniqueName: \"kubernetes.io/projected/b1cee457-8610-42c9-be6e-3cf0f8628aba-kube-api-access-jdlkn\") pod \"openstack-operator-controller-init-5c9f89db4c-99h7l\" (UID: \"b1cee457-8610-42c9-be6e-3cf0f8628aba\") " pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" Jan 23 13:46:18 crc kubenswrapper[4771]: I0123 13:46:18.270363 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdlkn\" (UniqueName: \"kubernetes.io/projected/b1cee457-8610-42c9-be6e-3cf0f8628aba-kube-api-access-jdlkn\") pod \"openstack-operator-controller-init-5c9f89db4c-99h7l\" (UID: \"b1cee457-8610-42c9-be6e-3cf0f8628aba\") " pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" Jan 23 13:46:18 crc kubenswrapper[4771]: I0123 13:46:18.292740 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" Jan 23 13:46:18 crc kubenswrapper[4771]: I0123 13:46:18.723686 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l"] Jan 23 13:46:18 crc kubenswrapper[4771]: W0123 13:46:18.732654 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1cee457_8610_42c9_be6e_3cf0f8628aba.slice/crio-5e13b3e8a4aef8da59ff2b14d8dd77c522d4e0d09c76e26def5dc8d214ea3497 WatchSource:0}: Error finding container 5e13b3e8a4aef8da59ff2b14d8dd77c522d4e0d09c76e26def5dc8d214ea3497: Status 404 returned error can't find the container with id 5e13b3e8a4aef8da59ff2b14d8dd77c522d4e0d09c76e26def5dc8d214ea3497 Jan 23 13:46:19 crc kubenswrapper[4771]: I0123 13:46:19.084451 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" event={"ID":"b1cee457-8610-42c9-be6e-3cf0f8628aba","Type":"ContainerStarted","Data":"5e13b3e8a4aef8da59ff2b14d8dd77c522d4e0d09c76e26def5dc8d214ea3497"} Jan 23 13:46:19 crc kubenswrapper[4771]: E0123 13:46:19.412637 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741dbcde_6dfb_4b44_89fb_f020af39320d.slice/crio-e036fc3f99c9f214158c649108470951e0b1514647bc9d72a08ac96eb77a20fb.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:46:25 crc kubenswrapper[4771]: I0123 13:46:25.146599 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" event={"ID":"b1cee457-8610-42c9-be6e-3cf0f8628aba","Type":"ContainerStarted","Data":"51fd9e08e91892574a30741c2cf41d63d6f3f967a0b212d523bd4ca0839ebaa7"} Jan 23 13:46:25 crc kubenswrapper[4771]: I0123 13:46:25.147299 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" Jan 23 13:46:25 crc kubenswrapper[4771]: I0123 13:46:25.179057 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" podStartSLOduration=2.324965903 podStartE2EDuration="8.179032938s" podCreationTimestamp="2026-01-23 13:46:17 +0000 UTC" firstStartedPulling="2026-01-23 13:46:18.735319171 +0000 UTC m=+819.757856796" lastFinishedPulling="2026-01-23 13:46:24.589386206 +0000 UTC m=+825.611923831" observedRunningTime="2026-01-23 13:46:25.175729333 +0000 UTC m=+826.198266978" watchObservedRunningTime="2026-01-23 13:46:25.179032938 +0000 UTC m=+826.201570573" Jan 23 13:46:29 crc kubenswrapper[4771]: E0123 13:46:29.578457 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741dbcde_6dfb_4b44_89fb_f020af39320d.slice/crio-e036fc3f99c9f214158c649108470951e0b1514647bc9d72a08ac96eb77a20fb.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:46:38 crc kubenswrapper[4771]: I0123 13:46:38.295966 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c9f89db4c-99h7l" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.719745 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.722778 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.725007 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4wfsn" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.727832 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.729081 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.730631 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bjfxn" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.741525 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.750638 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.752252 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.761364 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.767455 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xxl7d" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.769962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdbzd\" (UniqueName: \"kubernetes.io/projected/a6bb27ef-c367-4c44-9137-7e713f44271d-kube-api-access-tdbzd\") pod \"barbican-operator-controller-manager-7f86f8796f-9pg25\" (UID: \"a6bb27ef-c367-4c44-9137-7e713f44271d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.770078 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24fw\" (UniqueName: \"kubernetes.io/projected/e5c92a50-6224-413e-b4ca-9bdca838de01-kube-api-access-q24fw\") pod \"cinder-operator-controller-manager-69cf5d4557-p4s4m\" (UID: \"e5c92a50-6224-413e-b4ca-9bdca838de01\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.788489 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.801753 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.849680 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.849768 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hp547" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.878618 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdbzd\" (UniqueName: \"kubernetes.io/projected/a6bb27ef-c367-4c44-9137-7e713f44271d-kube-api-access-tdbzd\") pod \"barbican-operator-controller-manager-7f86f8796f-9pg25\" (UID: \"a6bb27ef-c367-4c44-9137-7e713f44271d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.878795 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjg4\" (UniqueName: \"kubernetes.io/projected/f27f9a1d-08bf-4576-90e8-0d5e9438b3d7-kube-api-access-rbjg4\") pod \"glance-operator-controller-manager-78fdd796fd-g7vwj\" (UID: \"f27f9a1d-08bf-4576-90e8-0d5e9438b3d7\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.878902 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbmmb\" (UniqueName: \"kubernetes.io/projected/7449c4dc-9594-459b-9e89-23cb5e86139b-kube-api-access-rbmmb\") pod \"designate-operator-controller-manager-b45d7bf98-qtsk2\" (UID: \"7449c4dc-9594-459b-9e89-23cb5e86139b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.879040 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24fw\" (UniqueName: \"kubernetes.io/projected/e5c92a50-6224-413e-b4ca-9bdca838de01-kube-api-access-q24fw\") pod \"cinder-operator-controller-manager-69cf5d4557-p4s4m\" (UID: \"e5c92a50-6224-413e-b4ca-9bdca838de01\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.925524 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.941493 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.943095 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.946472 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24fw\" (UniqueName: \"kubernetes.io/projected/e5c92a50-6224-413e-b4ca-9bdca838de01-kube-api-access-q24fw\") pod \"cinder-operator-controller-manager-69cf5d4557-p4s4m\" (UID: \"e5c92a50-6224-413e-b4ca-9bdca838de01\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.949214 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xdnzb" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.964372 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdbzd\" (UniqueName: \"kubernetes.io/projected/a6bb27ef-c367-4c44-9137-7e713f44271d-kube-api-access-tdbzd\") pod \"barbican-operator-controller-manager-7f86f8796f-9pg25\" (UID: \"a6bb27ef-c367-4c44-9137-7e713f44271d\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.979227 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.980899 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbs8m\" (UniqueName: \"kubernetes.io/projected/a3ff2047-0c52-4dee-a435-c88cb8c2690d-kube-api-access-nbs8m\") pod \"horizon-operator-controller-manager-77d5c5b54f-x7skn\" (UID: \"a3ff2047-0c52-4dee-a435-c88cb8c2690d\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.981093 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbmmb\" (UniqueName: \"kubernetes.io/projected/7449c4dc-9594-459b-9e89-23cb5e86139b-kube-api-access-rbmmb\") pod \"designate-operator-controller-manager-b45d7bf98-qtsk2\" (UID: \"7449c4dc-9594-459b-9e89-23cb5e86139b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.981290 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbjg4\" (UniqueName: \"kubernetes.io/projected/f27f9a1d-08bf-4576-90e8-0d5e9438b3d7-kube-api-access-rbjg4\") pod \"glance-operator-controller-manager-78fdd796fd-g7vwj\" (UID: \"f27f9a1d-08bf-4576-90e8-0d5e9438b3d7\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.980929 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.986033 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-qnhl5" Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.986284 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"] Jan 23 13:47:06 crc kubenswrapper[4771]: I0123 13:47:06.987480 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.000834 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-72w5z" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.001215 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.029164 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbmmb\" (UniqueName: \"kubernetes.io/projected/7449c4dc-9594-459b-9e89-23cb5e86139b-kube-api-access-rbmmb\") pod \"designate-operator-controller-manager-b45d7bf98-qtsk2\" (UID: \"7449c4dc-9594-459b-9e89-23cb5e86139b\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.045254 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.045909 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbjg4\" (UniqueName: \"kubernetes.io/projected/f27f9a1d-08bf-4576-90e8-0d5e9438b3d7-kube-api-access-rbjg4\") pod \"glance-operator-controller-manager-78fdd796fd-g7vwj\" (UID: \"f27f9a1d-08bf-4576-90e8-0d5e9438b3d7\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.058109 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.066214 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.083615 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzqp\" (UniqueName: \"kubernetes.io/projected/b4af8681-84d1-4cf7-b3a6-b167146e1973-kube-api-access-whzqp\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.083706 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbs8m\" (UniqueName: \"kubernetes.io/projected/a3ff2047-0c52-4dee-a435-c88cb8c2690d-kube-api-access-nbs8m\") pod \"horizon-operator-controller-manager-77d5c5b54f-x7skn\" (UID: \"a3ff2047-0c52-4dee-a435-c88cb8c2690d\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.083765 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.083816 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcj25\" (UniqueName: \"kubernetes.io/projected/02c8537d-9470-4887-9fb9-0700448bbc40-kube-api-access-tcj25\") pod \"heat-operator-controller-manager-594c8c9d5d-br5g2\" (UID: \"02c8537d-9470-4887-9fb9-0700448bbc40\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.085155 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.086538 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.087945 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.104881 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qx589" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.138947 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbs8m\" (UniqueName: \"kubernetes.io/projected/a3ff2047-0c52-4dee-a435-c88cb8c2690d-kube-api-access-nbs8m\") pod \"horizon-operator-controller-manager-77d5c5b54f-x7skn\" (UID: \"a3ff2047-0c52-4dee-a435-c88cb8c2690d\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.146372 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.177797 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.185860 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.185963 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcj25\" (UniqueName: \"kubernetes.io/projected/02c8537d-9470-4887-9fb9-0700448bbc40-kube-api-access-tcj25\") pod \"heat-operator-controller-manager-594c8c9d5d-br5g2\" (UID: \"02c8537d-9470-4887-9fb9-0700448bbc40\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.186004 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whzqp\" (UniqueName: \"kubernetes.io/projected/b4af8681-84d1-4cf7-b3a6-b167146e1973-kube-api-access-whzqp\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.186051 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfps\" (UniqueName: \"kubernetes.io/projected/5d096797-513e-4b08-afe9-0c19eb099a3d-kube-api-access-krfps\") pod \"ironic-operator-controller-manager-598f7747c9-l2gvh\" (UID: \"5d096797-513e-4b08-afe9-0c19eb099a3d\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" Jan 23 13:47:07 crc kubenswrapper[4771]: E0123 13:47:07.186764 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 13:47:07 crc kubenswrapper[4771]: E0123 13:47:07.186839 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert podName:b4af8681-84d1-4cf7-b3a6-b167146e1973 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:07.686816764 +0000 UTC m=+868.709354389 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert") pod "infra-operator-controller-manager-694cf4f878-mxmqk" (UID: "b4af8681-84d1-4cf7-b3a6-b167146e1973") : secret "infra-operator-webhook-server-cert" not found Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.224129 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.256834 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcj25\" (UniqueName: \"kubernetes.io/projected/02c8537d-9470-4887-9fb9-0700448bbc40-kube-api-access-tcj25\") pod \"heat-operator-controller-manager-594c8c9d5d-br5g2\" (UID: \"02c8537d-9470-4887-9fb9-0700448bbc40\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.285348 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.286205 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.287537 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krfps\" (UniqueName: \"kubernetes.io/projected/5d096797-513e-4b08-afe9-0c19eb099a3d-kube-api-access-krfps\") pod \"ironic-operator-controller-manager-598f7747c9-l2gvh\" (UID: \"5d096797-513e-4b08-afe9-0c19eb099a3d\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.291332 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.291525 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mlbxn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.332518 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzqp\" (UniqueName: \"kubernetes.io/projected/b4af8681-84d1-4cf7-b3a6-b167146e1973-kube-api-access-whzqp\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.333060 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krfps\" (UniqueName: \"kubernetes.io/projected/5d096797-513e-4b08-afe9-0c19eb099a3d-kube-api-access-krfps\") pod \"ironic-operator-controller-manager-598f7747c9-l2gvh\" (UID: \"5d096797-513e-4b08-afe9-0c19eb099a3d\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.347653 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.349264 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.374765 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ws958" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.391504 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.392583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl4wx\" (UniqueName: \"kubernetes.io/projected/8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570-kube-api-access-jl4wx\") pod \"keystone-operator-controller-manager-b8b6d4659-pvkv5\" (UID: \"8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.392716 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqx6p\" (UniqueName: \"kubernetes.io/projected/368370f7-de60-484e-8ad6-35d0298c2520-kube-api-access-qqx6p\") pod \"manila-operator-controller-manager-78c6999f6f-7tzww\" (UID: \"368370f7-de60-484e-8ad6-35d0298c2520\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.406889 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.465475 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.477773 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.478807 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.483567 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.485051 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.487627 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hrfrt" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.500602 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.501824 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl4wx\" (UniqueName: \"kubernetes.io/projected/8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570-kube-api-access-jl4wx\") pod \"keystone-operator-controller-manager-b8b6d4659-pvkv5\" (UID: \"8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.501897 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqx6p\" (UniqueName: \"kubernetes.io/projected/368370f7-de60-484e-8ad6-35d0298c2520-kube-api-access-qqx6p\") pod \"manila-operator-controller-manager-78c6999f6f-7tzww\" (UID: \"368370f7-de60-484e-8ad6-35d0298c2520\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.501945 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9hdq\" (UniqueName: \"kubernetes.io/projected/2e2e8c05-b33d-410f-ad27-e80ed0a243ee-kube-api-access-m9hdq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn\" (UID: \"2e2e8c05-b33d-410f-ad27-e80ed0a243ee\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.502685 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-74xjn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.506261 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.507334 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.513975 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qcbrx" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.521474 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.524889 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.535527 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqx6p\" (UniqueName: \"kubernetes.io/projected/368370f7-de60-484e-8ad6-35d0298c2520-kube-api-access-qqx6p\") pod \"manila-operator-controller-manager-78c6999f6f-7tzww\" (UID: \"368370f7-de60-484e-8ad6-35d0298c2520\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.541544 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.542911 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.547109 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.560526 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.561668 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lw4s4" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.569139 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.587352 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.591027 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.593282 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tsqvp" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.630479 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.634688 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.637872 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc67m\" (UniqueName: \"kubernetes.io/projected/ec9af354-7d56-47ee-aa2f-be57edf2c7bc-kube-api-access-tc67m\") pod \"neutron-operator-controller-manager-78d58447c5-c9h8l\" (UID: \"ec9af354-7d56-47ee-aa2f-be57edf2c7bc\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.638074 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9hdq\" (UniqueName: \"kubernetes.io/projected/2e2e8c05-b33d-410f-ad27-e80ed0a243ee-kube-api-access-m9hdq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn\" (UID: \"2e2e8c05-b33d-410f-ad27-e80ed0a243ee\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.638237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjf67\" (UniqueName: \"kubernetes.io/projected/d703eb08-df59-4676-a522-c869982a8772-kube-api-access-bjf67\") pod \"octavia-operator-controller-manager-7bd9774b6-89prd\" (UID: \"d703eb08-df59-4676-a522-c869982a8772\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.639831 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rvw\" (UniqueName: \"kubernetes.io/projected/06cccb54-4ed2-4ee7-af3e-c26532e49b23-kube-api-access-85rvw\") pod \"nova-operator-controller-manager-6b8bc8d87d-xpp8r\" (UID: \"06cccb54-4ed2-4ee7-af3e-c26532e49b23\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.653998 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.654222 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7c87h" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.669964 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.670711 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.671315 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl4wx\" (UniqueName: \"kubernetes.io/projected/8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570-kube-api-access-jl4wx\") pod \"keystone-operator-controller-manager-b8b6d4659-pvkv5\" (UID: \"8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.693090 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.721075 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.719082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9hdq\" (UniqueName: \"kubernetes.io/projected/2e2e8c05-b33d-410f-ad27-e80ed0a243ee-kube-api-access-m9hdq\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn\" (UID: \"2e2e8c05-b33d-410f-ad27-e80ed0a243ee\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.746119 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:07 crc kubenswrapper[4771]: E0123 13:47:07.746258 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 13:47:07 crc kubenswrapper[4771]: E0123 13:47:07.746300 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert podName:b4af8681-84d1-4cf7-b3a6-b167146e1973 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:08.746285898 +0000 UTC m=+869.768823523 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert") pod "infra-operator-controller-manager-694cf4f878-mxmqk" (UID: "b4af8681-84d1-4cf7-b3a6-b167146e1973") : secret "infra-operator-webhook-server-cert" not found Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.746876 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjf67\" (UniqueName: \"kubernetes.io/projected/d703eb08-df59-4676-a522-c869982a8772-kube-api-access-bjf67\") pod \"octavia-operator-controller-manager-7bd9774b6-89prd\" (UID: \"d703eb08-df59-4676-a522-c869982a8772\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.747741 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85rvw\" (UniqueName: \"kubernetes.io/projected/06cccb54-4ed2-4ee7-af3e-c26532e49b23-kube-api-access-85rvw\") pod \"nova-operator-controller-manager-6b8bc8d87d-xpp8r\" (UID: \"06cccb54-4ed2-4ee7-af3e-c26532e49b23\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.748021 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.748110 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvfdp\" (UniqueName: \"kubernetes.io/projected/4e79abf5-0755-4fec-998c-b4eba8ebe531-kube-api-access-qvfdp\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.748329 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc67m\" (UniqueName: \"kubernetes.io/projected/ec9af354-7d56-47ee-aa2f-be57edf2c7bc-kube-api-access-tc67m\") pod \"neutron-operator-controller-manager-78d58447c5-c9h8l\" (UID: \"ec9af354-7d56-47ee-aa2f-be57edf2c7bc\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.748520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566lk\" (UniqueName: \"kubernetes.io/projected/55997410-62dd-4965-938a-6e1cdfba0cd5-kube-api-access-566lk\") pod \"ovn-operator-controller-manager-55db956ddc-sx4v9\" (UID: \"55997410-62dd-4965-938a-6e1cdfba0cd5\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.781641 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.783707 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.788432 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-b9lv2" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.804866 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.810105 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjf67\" (UniqueName: \"kubernetes.io/projected/d703eb08-df59-4676-a522-c869982a8772-kube-api-access-bjf67\") pod \"octavia-operator-controller-manager-7bd9774b6-89prd\" (UID: \"d703eb08-df59-4676-a522-c869982a8772\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.818879 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85rvw\" (UniqueName: \"kubernetes.io/projected/06cccb54-4ed2-4ee7-af3e-c26532e49b23-kube-api-access-85rvw\") pod \"nova-operator-controller-manager-6b8bc8d87d-xpp8r\" (UID: \"06cccb54-4ed2-4ee7-af3e-c26532e49b23\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.822205 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc67m\" (UniqueName: \"kubernetes.io/projected/ec9af354-7d56-47ee-aa2f-be57edf2c7bc-kube-api-access-tc67m\") pod \"neutron-operator-controller-manager-78d58447c5-c9h8l\" (UID: \"ec9af354-7d56-47ee-aa2f-be57edf2c7bc\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.829154 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.830239 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.833956 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rghlg" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.838643 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.850251 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvfdp\" (UniqueName: \"kubernetes.io/projected/4e79abf5-0755-4fec-998c-b4eba8ebe531-kube-api-access-qvfdp\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.850320 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-566lk\" (UniqueName: \"kubernetes.io/projected/55997410-62dd-4965-938a-6e1cdfba0cd5-kube-api-access-566lk\") pod \"ovn-operator-controller-manager-55db956ddc-sx4v9\" (UID: \"55997410-62dd-4965-938a-6e1cdfba0cd5\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.850472 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.850506 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh4vt\" (UniqueName: \"kubernetes.io/projected/6ac8c39a-105d-4ab1-afd6-4786c9aa1386-kube-api-access-bh4vt\") pod \"placement-operator-controller-manager-5d646b7d76-dgqzc\" (UID: \"6ac8c39a-105d-4ab1-afd6-4786c9aa1386\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" Jan 23 13:47:07 crc kubenswrapper[4771]: E0123 13:47:07.850998 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 13:47:07 crc kubenswrapper[4771]: E0123 13:47:07.851055 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert podName:4e79abf5-0755-4fec-998c-b4eba8ebe531 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:08.351035455 +0000 UTC m=+869.373573080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" (UID: "4e79abf5-0755-4fec-998c-b4eba8ebe531") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.865589 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.865932 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.867057 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.877529 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6nhsp" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.880773 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.882210 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.887434 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvfdp\" (UniqueName: \"kubernetes.io/projected/4e79abf5-0755-4fec-998c-b4eba8ebe531-kube-api-access-qvfdp\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.887519 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.889237 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.890224 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.897692 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6xd8x" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.897953 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-t5xzs" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.898877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-566lk\" (UniqueName: \"kubernetes.io/projected/55997410-62dd-4965-938a-6e1cdfba0cd5-kube-api-access-566lk\") pod \"ovn-operator-controller-manager-55db956ddc-sx4v9\" (UID: \"55997410-62dd-4965-938a-6e1cdfba0cd5\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.899041 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.905199 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.931928 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.933070 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.935949 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.936217 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8rgxx" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.936339 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.951926 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.952144 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l7rn\" (UniqueName: \"kubernetes.io/projected/25f36fa7-ebf2-406b-bb09-f3a83fd19685-kube-api-access-9l7rn\") pod \"swift-operator-controller-manager-547cbdb99f-m76mt\" (UID: \"25f36fa7-ebf2-406b-bb09-f3a83fd19685\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.952185 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsmbz\" (UniqueName: \"kubernetes.io/projected/763a3e24-643d-473f-bbb2-d7f4816a0b58-kube-api-access-dsmbz\") pod \"telemetry-operator-controller-manager-85cd9769bb-5s27m\" (UID: \"763a3e24-643d-473f-bbb2-d7f4816a0b58\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.952219 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwck8\" (UniqueName: \"kubernetes.io/projected/6c48b549-8b0b-4914-be38-157d50994b3b-kube-api-access-zwck8\") pod \"test-operator-controller-manager-69797bbcbd-nm9jp\" (UID: \"6c48b549-8b0b-4914-be38-157d50994b3b\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.952294 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48jqc\" (UniqueName: \"kubernetes.io/projected/eb45ce0a-9090-4553-b2b3-6d025d099f0f-kube-api-access-48jqc\") pod \"watcher-operator-controller-manager-78dbdc4d57-djf7x\" (UID: \"eb45ce0a-9090-4553-b2b3-6d025d099f0f\") " pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.952322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh4vt\" (UniqueName: \"kubernetes.io/projected/6ac8c39a-105d-4ab1-afd6-4786c9aa1386-kube-api-access-bh4vt\") pod \"placement-operator-controller-manager-5d646b7d76-dgqzc\" (UID: \"6ac8c39a-105d-4ab1-afd6-4786c9aa1386\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.953972 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.974315 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh4vt\" (UniqueName: \"kubernetes.io/projected/6ac8c39a-105d-4ab1-afd6-4786c9aa1386-kube-api-access-bh4vt\") pod \"placement-operator-controller-manager-5d646b7d76-dgqzc\" (UID: \"6ac8c39a-105d-4ab1-afd6-4786c9aa1386\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.983449 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc"] Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.984384 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.987261 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r7lhd" Jan 23 13:47:07 crc kubenswrapper[4771]: I0123 13:47:07.990979 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:07.999572 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc"] Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.022427 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.036888 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.054677 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.054745 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l7rn\" (UniqueName: \"kubernetes.io/projected/25f36fa7-ebf2-406b-bb09-f3a83fd19685-kube-api-access-9l7rn\") pod \"swift-operator-controller-manager-547cbdb99f-m76mt\" (UID: \"25f36fa7-ebf2-406b-bb09-f3a83fd19685\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.054782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsmbz\" (UniqueName: \"kubernetes.io/projected/763a3e24-643d-473f-bbb2-d7f4816a0b58-kube-api-access-dsmbz\") pod \"telemetry-operator-controller-manager-85cd9769bb-5s27m\" (UID: \"763a3e24-643d-473f-bbb2-d7f4816a0b58\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.054807 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z9s6\" (UniqueName: \"kubernetes.io/projected/08c96bfa-007d-41c5-a03a-4e92c9083c3f-kube-api-access-6z9s6\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.054856 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwck8\" (UniqueName: \"kubernetes.io/projected/6c48b549-8b0b-4914-be38-157d50994b3b-kube-api-access-zwck8\") pod \"test-operator-controller-manager-69797bbcbd-nm9jp\" (UID: \"6c48b549-8b0b-4914-be38-157d50994b3b\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.054907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.054958 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48jqc\" (UniqueName: \"kubernetes.io/projected/eb45ce0a-9090-4553-b2b3-6d025d099f0f-kube-api-access-48jqc\") pod \"watcher-operator-controller-manager-78dbdc4d57-djf7x\" (UID: \"eb45ce0a-9090-4553-b2b3-6d025d099f0f\") " pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.055013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4m4v\" (UniqueName: 
\"kubernetes.io/projected/02ecde71-15d7-4cf0-8928-505b2f0899fd-kube-api-access-r4m4v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vn6bc\" (UID: \"02ecde71-15d7-4cf0-8928-505b2f0899fd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.074062 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l7rn\" (UniqueName: \"kubernetes.io/projected/25f36fa7-ebf2-406b-bb09-f3a83fd19685-kube-api-access-9l7rn\") pod \"swift-operator-controller-manager-547cbdb99f-m76mt\" (UID: \"25f36fa7-ebf2-406b-bb09-f3a83fd19685\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.085048 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwck8\" (UniqueName: \"kubernetes.io/projected/6c48b549-8b0b-4914-be38-157d50994b3b-kube-api-access-zwck8\") pod \"test-operator-controller-manager-69797bbcbd-nm9jp\" (UID: \"6c48b549-8b0b-4914-be38-157d50994b3b\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.088714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsmbz\" (UniqueName: \"kubernetes.io/projected/763a3e24-643d-473f-bbb2-d7f4816a0b58-kube-api-access-dsmbz\") pod \"telemetry-operator-controller-manager-85cd9769bb-5s27m\" (UID: \"763a3e24-643d-473f-bbb2-d7f4816a0b58\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.088871 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48jqc\" (UniqueName: \"kubernetes.io/projected/eb45ce0a-9090-4553-b2b3-6d025d099f0f-kube-api-access-48jqc\") pod \"watcher-operator-controller-manager-78dbdc4d57-djf7x\" (UID: \"eb45ce0a-9090-4553-b2b3-6d025d099f0f\") " pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.109344 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.131363 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.158691 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.159037 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4m4v\" (UniqueName: \"kubernetes.io/projected/02ecde71-15d7-4cf0-8928-505b2f0899fd-kube-api-access-r4m4v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vn6bc\" (UID: \"02ecde71-15d7-4cf0-8928-505b2f0899fd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.159101 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.159137 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z9s6\" (UniqueName: \"kubernetes.io/projected/08c96bfa-007d-41c5-a03a-4e92c9083c3f-kube-api-access-6z9s6\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.159850 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.159922 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:08.659898524 +0000 UTC m=+869.682436149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "metrics-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.160377 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.161006 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:08.66099358 +0000 UTC m=+869.683531205 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.195787 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2"] Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.208217 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z9s6\" (UniqueName: \"kubernetes.io/projected/08c96bfa-007d-41c5-a03a-4e92c9083c3f-kube-api-access-6z9s6\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.208721 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4m4v\" (UniqueName: \"kubernetes.io/projected/02ecde71-15d7-4cf0-8928-505b2f0899fd-kube-api-access-r4m4v\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vn6bc\" (UID: \"02ecde71-15d7-4cf0-8928-505b2f0899fd\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.212128 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.222994 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.284395 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.351199 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.365743 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.368042 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.368106 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert podName:4e79abf5-0755-4fec-998c-b4eba8ebe531 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:09.368086197 +0000 UTC m=+870.390623822 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" (UID: "4e79abf5-0755-4fec-998c-b4eba8ebe531") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.374507 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.379304 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj"] Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.387182 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m"] Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.437146 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25"] Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.502508 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" event={"ID":"7449c4dc-9594-459b-9e89-23cb5e86139b","Type":"ContainerStarted","Data":"71399356769b400d0e44d6801a73e75f4b0bb75e9b1008fc17e4d4e8354d062c"} Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.506970 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" event={"ID":"f27f9a1d-08bf-4576-90e8-0d5e9438b3d7","Type":"ContainerStarted","Data":"1edac849947b99b5f693cd452ed19b720f481bf6217aad08dcc4df5f46922b57"} Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.679881 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.680023 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.680200 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.680258 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:09.680242342 +0000 UTC m=+870.702779967 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.680840 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.680884 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:09.680873502 +0000 UTC m=+870.703411127 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "metrics-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.690194 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn"] Jan 23 13:47:08 crc kubenswrapper[4771]: I0123 13:47:08.781370 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.781587 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: E0123 13:47:08.781671 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert podName:b4af8681-84d1-4cf7-b3a6-b167146e1973 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:10.781649302 +0000 UTC m=+871.804186927 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert") pod "infra-operator-controller-manager-694cf4f878-mxmqk" (UID: "b4af8681-84d1-4cf7-b3a6-b167146e1973") : secret "infra-operator-webhook-server-cert" not found Jan 23 13:47:08 crc kubenswrapper[4771]: W0123 13:47:08.798098 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3ff2047_0c52_4dee_a435_c88cb8c2690d.slice/crio-7b5ea34eaea31bf39594a9c7012d13ca9f0afac7ce27bddf8846766c227ab258 WatchSource:0}: Error finding container 7b5ea34eaea31bf39594a9c7012d13ca9f0afac7ce27bddf8846766c227ab258: Status 404 returned error can't find the container with id 7b5ea34eaea31bf39594a9c7012d13ca9f0afac7ce27bddf8846766c227ab258 Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.391195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" Jan 23 13:47:09 crc kubenswrapper[4771]: E0123 13:47:09.391831 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 13:47:09 crc kubenswrapper[4771]: E0123 13:47:09.391885 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert podName:4e79abf5-0755-4fec-998c-b4eba8ebe531 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:11.391870833 +0000 UTC m=+872.414408458 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" (UID: "4e79abf5-0755-4fec-998c-b4eba8ebe531") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.486497 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5"] Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.538966 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" event={"ID":"a6bb27ef-c367-4c44-9137-7e713f44271d","Type":"ContainerStarted","Data":"972a8cd1a4b1530f4ede19d5d536c8ec44db598ea5fa374ff9546ad5433384ec"} Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.539916 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" event={"ID":"e5c92a50-6224-413e-b4ca-9bdca838de01","Type":"ContainerStarted","Data":"4325f10b4b0bce8d44c5c171d2248ded013c103e3780b5c080d761892307eb9e"} Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.571275 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww"] Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.598684 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" event={"ID":"a3ff2047-0c52-4dee-a435-c88cb8c2690d","Type":"ContainerStarted","Data":"7b5ea34eaea31bf39594a9c7012d13ca9f0afac7ce27bddf8846766c227ab258"} Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.630406 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2"] Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.682211 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh"] Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.701857 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.702004 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:09 crc kubenswrapper[4771]: E0123 13:47:09.702999 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 13:47:09 crc kubenswrapper[4771]: E0123 13:47:09.703070 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. 
No retries permitted until 2026-01-23 13:47:11.703049676 +0000 UTC m=+872.725587301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "webhook-server-cert" not found Jan 23 13:47:09 crc kubenswrapper[4771]: E0123 13:47:09.703619 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 13:47:09 crc kubenswrapper[4771]: E0123 13:47:09.703684 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:11.703672597 +0000 UTC m=+872.726210222 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "metrics-server-cert" not found Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.720320 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn"] Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.811951 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9"] Jan 23 13:47:09 crc kubenswrapper[4771]: W0123 13:47:09.829446 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55997410_62dd_4965_938a_6e1cdfba0cd5.slice/crio-c04c42bdeb8eec9dcaa71e044709656a10379aaa26a48deb1f5c9d2cdcce0231 WatchSource:0}: Error finding container c04c42bdeb8eec9dcaa71e044709656a10379aaa26a48deb1f5c9d2cdcce0231: Status 404 returned error can't find the container with id c04c42bdeb8eec9dcaa71e044709656a10379aaa26a48deb1f5c9d2cdcce0231 Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.859242 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd"] Jan 23 13:47:09 crc kubenswrapper[4771]: W0123 13:47:09.861831 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd703eb08_df59_4676_a522_c869982a8772.slice/crio-b323c113a361666edcb095b2acf75db115ccfbd279829572e428de76c750ccc2 WatchSource:0}: Error finding container b323c113a361666edcb095b2acf75db115ccfbd279829572e428de76c750ccc2: Status 404 returned error can't find the container with id b323c113a361666edcb095b2acf75db115ccfbd279829572e428de76c750ccc2 Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.877345 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc"] Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.944240 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r"] Jan 23 13:47:09 crc kubenswrapper[4771]: W0123 13:47:09.959351 4771 manager.go:1169] Failed to process watch event {EventType:0 
Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.972022 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l"]
Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.983391 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt"]
Jan 23 13:47:09 crc kubenswrapper[4771]: I0123 13:47:09.991821 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc"]
Jan 23 13:47:10 crc kubenswrapper[4771]: W0123 13:47:10.001105 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02ecde71_15d7_4cf0_8928_505b2f0899fd.slice/crio-a1abf9798e211ec58527f61752cd7576be37823a21d33af5596dd73abe6294f5 WatchSource:0}: Error finding container a1abf9798e211ec58527f61752cd7576be37823a21d33af5596dd73abe6294f5: Status 404 returned error can't find the container with id a1abf9798e211ec58527f61752cd7576be37823a21d33af5596dd73abe6294f5
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.012132 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x"]
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.035030 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4m4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vn6bc_openstack-operators(02ecde71-15d7-4cf0-8928-505b2f0899fd): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.036637 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" podUID="02ecde71-15d7-4cf0-8928-505b2f0899fd"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.042720 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zwck8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-nm9jp_openstack-operators(6c48b549-8b0b-4914-be38-157d50994b3b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.042927 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dsmbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-5s27m_openstack-operators(763a3e24-643d-473f-bbb2-d7f4816a0b58): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.043795 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" podUID="6c48b549-8b0b-4914-be38-157d50994b3b"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.044002 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" podUID="763a3e24-643d-473f-bbb2-d7f4816a0b58"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.049813 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tc67m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-c9h8l_openstack-operators(ec9af354-7d56-47ee-aa2f-be57edf2c7bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.051504 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" podUID="ec9af354-7d56-47ee-aa2f-be57edf2c7bc"
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.098396 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m"]
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.110617 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp"]
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.610051 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" event={"ID":"763a3e24-643d-473f-bbb2-d7f4816a0b58","Type":"ContainerStarted","Data":"09200a81f97b7a57b4cf789be1869e4889aac5e11f2a6b0e594192d1d738ce90"}
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.613473 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" podUID="763a3e24-643d-473f-bbb2-d7f4816a0b58"
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.618931 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" event={"ID":"6ac8c39a-105d-4ab1-afd6-4786c9aa1386","Type":"ContainerStarted","Data":"050d4349578b40f6708b1f2e3c9dfe29f310716ad2f9e462aa88ed82787a0522"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.622533 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" event={"ID":"5d096797-513e-4b08-afe9-0c19eb099a3d","Type":"ContainerStarted","Data":"2d82b40b9656298c5909c35c6f017eae32bf65532a7c81a117718251baeb8ac4"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.628865 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" event={"ID":"06cccb54-4ed2-4ee7-af3e-c26532e49b23","Type":"ContainerStarted","Data":"1a3dc1007d1c762db4d493a652246f117e349e417224016292a28dd774bcaf62"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.634472 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" event={"ID":"ec9af354-7d56-47ee-aa2f-be57edf2c7bc","Type":"ContainerStarted","Data":"6d6294b086c659cbc1062d1a509686c1faa56c2e1ba8399e2526dab544aa6692"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.638319 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" event={"ID":"25f36fa7-ebf2-406b-bb09-f3a83fd19685","Type":"ContainerStarted","Data":"62cbbe8fdefa04609568f8480e5289afd964c36804cedea2ea91a3877f5a583c"}
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.638646 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" podUID="ec9af354-7d56-47ee-aa2f-be57edf2c7bc"
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.640245 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" event={"ID":"2e2e8c05-b33d-410f-ad27-e80ed0a243ee","Type":"ContainerStarted","Data":"a0b374638c375a6e70a96419d33476d9be3680789490eca4681d7df4d66d591e"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.642193 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" event={"ID":"eb45ce0a-9090-4553-b2b3-6d025d099f0f","Type":"ContainerStarted","Data":"8a6568b01123a0e776a3225f384c38419bb71fbb1865c2c37fdd49b39816295f"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.643716 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" event={"ID":"55997410-62dd-4965-938a-6e1cdfba0cd5","Type":"ContainerStarted","Data":"c04c42bdeb8eec9dcaa71e044709656a10379aaa26a48deb1f5c9d2cdcce0231"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.645526 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" event={"ID":"8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570","Type":"ContainerStarted","Data":"188bfef078c53efd703ceb54f469463ecc8a7af35138cdf57207f0c72b5bcfff"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.649450 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" event={"ID":"6c48b549-8b0b-4914-be38-157d50994b3b","Type":"ContainerStarted","Data":"470b71f3170aa60cbc5b50efc2d2d47f8858611b2bf8854059a917a256b47c7b"}
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.656156 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" podUID="6c48b549-8b0b-4914-be38-157d50994b3b"
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.659080 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" event={"ID":"02c8537d-9470-4887-9fb9-0700448bbc40","Type":"ContainerStarted","Data":"d2a2210b22a12ecb7f20cafebd07bb0284376f7e84337f522fd9f66db22f30b2"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.660422 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" event={"ID":"d703eb08-df59-4676-a522-c869982a8772","Type":"ContainerStarted","Data":"b323c113a361666edcb095b2acf75db115ccfbd279829572e428de76c750ccc2"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.661453 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" event={"ID":"368370f7-de60-484e-8ad6-35d0298c2520","Type":"ContainerStarted","Data":"934c3f8a52ea554249bc72ece4d2c17ef3a51bf04482009951369e3b5434a4ba"}
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.664491 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" event={"ID":"02ecde71-15d7-4cf0-8928-505b2f0899fd","Type":"ContainerStarted","Data":"a1abf9798e211ec58527f61752cd7576be37823a21d33af5596dd73abe6294f5"}
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.670372 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" podUID="02ecde71-15d7-4cf0-8928-505b2f0899fd"
Jan 23 13:47:10 crc kubenswrapper[4771]: I0123 13:47:10.824211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.824505 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 13:47:10 crc kubenswrapper[4771]: E0123 13:47:10.824561 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert podName:b4af8681-84d1-4cf7-b3a6-b167146e1973 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:14.824544686 +0000 UTC m=+875.847082311 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert") pod "infra-operator-controller-manager-694cf4f878-mxmqk" (UID: "b4af8681-84d1-4cf7-b3a6-b167146e1973") : secret "infra-operator-webhook-server-cert" not found
Jan 23 13:47:11 crc kubenswrapper[4771]: I0123 13:47:11.441029 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.441196 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.441255 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert podName:4e79abf5-0755-4fec-998c-b4eba8ebe531 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:15.441241492 +0000 UTC m=+876.463779117 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" (UID: "4e79abf5-0755-4fec-998c-b4eba8ebe531") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.677084 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" podUID="02ecde71-15d7-4cf0-8928-505b2f0899fd"
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.679800 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" podUID="6c48b549-8b0b-4914-be38-157d50994b3b"
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.680072 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" podUID="763a3e24-643d-473f-bbb2-d7f4816a0b58"
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.680073 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" podUID="ec9af354-7d56-47ee-aa2f-be57edf2c7bc"
Jan 23 13:47:11 crc kubenswrapper[4771]: I0123 13:47:11.749875 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:11 crc kubenswrapper[4771]: I0123 13:47:11.750036 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.750655 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.750707 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:15.75068939 +0000 UTC m=+876.773227015 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "metrics-server-cert" not found
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.751913 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 23 13:47:11 crc kubenswrapper[4771]: E0123 13:47:11.751971 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:15.75195504 +0000 UTC m=+876.774492685 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "webhook-server-cert" not found
Jan 23 13:47:14 crc kubenswrapper[4771]: I0123 13:47:14.920111 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:14 crc kubenswrapper[4771]: E0123 13:47:14.920391 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 13:47:14 crc kubenswrapper[4771]: E0123 13:47:14.920536 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert podName:b4af8681-84d1-4cf7-b3a6-b167146e1973 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:22.920513225 +0000 UTC m=+883.943050929 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert") pod "infra-operator-controller-manager-694cf4f878-mxmqk" (UID: "b4af8681-84d1-4cf7-b3a6-b167146e1973") : secret "infra-operator-webhook-server-cert" not found
Jan 23 13:47:15 crc kubenswrapper[4771]: I0123 13:47:15.529897 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:47:15 crc kubenswrapper[4771]: E0123 13:47:15.530114 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 13:47:15 crc kubenswrapper[4771]: E0123 13:47:15.530208 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert podName:4e79abf5-0755-4fec-998c-b4eba8ebe531 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:23.530187798 +0000 UTC m=+884.552725413 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" (UID: "4e79abf5-0755-4fec-998c-b4eba8ebe531") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 13:47:15 crc kubenswrapper[4771]: I0123 13:47:15.835328 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:15 crc kubenswrapper[4771]: I0123 13:47:15.835463 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:15 crc kubenswrapper[4771]: E0123 13:47:15.835609 4771 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 23 13:47:15 crc kubenswrapper[4771]: E0123 13:47:15.835664 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:23.835649819 +0000 UTC m=+884.858187444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "webhook-server-cert" not found
Jan 23 13:47:15 crc kubenswrapper[4771]: E0123 13:47:15.835707 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 23 13:47:15 crc kubenswrapper[4771]: E0123 13:47:15.835728 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:23.835719941 +0000 UTC m=+884.858257556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "metrics-server-cert" not found
Jan 23 13:47:21 crc kubenswrapper[4771]: E0123 13:47:21.766503 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd"
Jan 23 13:47:21 crc kubenswrapper[4771]: E0123 13:47:21.767313 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdbzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-9pg25_openstack-operators(a6bb27ef-c367-4c44-9137-7e713f44271d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:21 crc kubenswrapper[4771]: E0123 13:47:21.768800 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" podUID="a6bb27ef-c367-4c44-9137-7e713f44271d"
Jan 23 13:47:22 crc kubenswrapper[4771]: E0123 13:47:22.688382 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831"
Jan 23 13:47:22 crc kubenswrapper[4771]: E0123 13:47:22.689034 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85rvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-xpp8r_openstack-operators(06cccb54-4ed2-4ee7-af3e-c26532e49b23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:22 crc kubenswrapper[4771]: E0123 13:47:22.690252 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" podUID="06cccb54-4ed2-4ee7-af3e-c26532e49b23"
Jan 23 13:47:22 crc kubenswrapper[4771]: E0123 13:47:22.771979 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" podUID="a6bb27ef-c367-4c44-9137-7e713f44271d"
Jan 23 13:47:22 crc kubenswrapper[4771]: E0123 13:47:22.775927 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" podUID="06cccb54-4ed2-4ee7-af3e-c26532e49b23"
Jan 23 13:47:22 crc kubenswrapper[4771]: I0123 13:47:22.995404 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:22 crc kubenswrapper[4771]: E0123 13:47:22.995647 4771 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 23 13:47:22 crc kubenswrapper[4771]: E0123 13:47:22.995734 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert podName:b4af8681-84d1-4cf7-b3a6-b167146e1973 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:38.995711213 +0000 UTC m=+900.018248838 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert") pod "infra-operator-controller-manager-694cf4f878-mxmqk" (UID: "b4af8681-84d1-4cf7-b3a6-b167146e1973") : secret "infra-operator-webhook-server-cert" not found
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.439487 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e"
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.439727 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krfps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-l2gvh_openstack-operators(5d096797-513e-4b08-afe9-0c19eb099a3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.440981 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" podUID="5d096797-513e-4b08-afe9-0c19eb099a3d"
Jan 23 13:47:23 crc kubenswrapper[4771]: I0123 13:47:23.605762 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.605936 4771 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.605984 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert podName:4e79abf5-0755-4fec-998c-b4eba8ebe531 nodeName:}" failed. No retries permitted until 2026-01-23 13:47:39.605971235 +0000 UTC m=+900.628508860 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" (UID: "4e79abf5-0755-4fec-998c-b4eba8ebe531") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.779729 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" podUID="5d096797-513e-4b08-afe9-0c19eb099a3d"
Jan 23 13:47:23 crc kubenswrapper[4771]: I0123 13:47:23.912185 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:23 crc kubenswrapper[4771]: I0123 13:47:23.912394 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.912429 4771 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 23 13:47:23 crc kubenswrapper[4771]: E0123 13:47:23.912507 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs podName:08c96bfa-007d-41c5-a03a-4e92c9083c3f nodeName:}" failed. No retries permitted until 2026-01-23 13:47:39.912487661 +0000 UTC m=+900.935025286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "metrics-server-cert" not found
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs") pod "openstack-operator-controller-manager-68f54d99d8-mwsk5" (UID: "08c96bfa-007d-41c5-a03a-4e92c9083c3f") : secret "metrics-server-cert" not found Jan 23 13:47:23 crc kubenswrapper[4771]: I0123 13:47:23.940312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-webhook-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" Jan 23 13:47:24 crc kubenswrapper[4771]: E0123 13:47:24.226748 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 23 13:47:24 crc kubenswrapper[4771]: E0123 13:47:24.227560 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nbs8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-x7skn_openstack-operators(a3ff2047-0c52-4dee-a435-c88cb8c2690d): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:24 crc kubenswrapper[4771]: E0123 13:47:24.229099 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" podUID="a3ff2047-0c52-4dee-a435-c88cb8c2690d"
Jan 23 13:47:24 crc kubenswrapper[4771]: E0123 13:47:24.792299 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" podUID="a3ff2047-0c52-4dee-a435-c88cb8c2690d"
Jan 23 13:47:25 crc kubenswrapper[4771]: E0123 13:47:25.584904 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8"
Jan 23 13:47:25 crc kubenswrapper[4771]: E0123 13:47:25.585219 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqx6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-7tzww_openstack-operators(368370f7-de60-484e-8ad6-35d0298c2520): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:25 crc kubenswrapper[4771]: E0123 13:47:25.586442 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" podUID="368370f7-de60-484e-8ad6-35d0298c2520"
Jan 23 13:47:25 crc kubenswrapper[4771]: E0123 13:47:25.798139 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" podUID="368370f7-de60-484e-8ad6-35d0298c2520"
Jan 23 13:47:26 crc kubenswrapper[4771]: E0123 13:47:26.308280 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5"
Jan 23 13:47:26 crc kubenswrapper[4771]: E0123 13:47:26.308556 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjf67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-89prd_openstack-operators(d703eb08-df59-4676-a522-c869982a8772): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:26 crc kubenswrapper[4771]: E0123 13:47:26.309800 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" podUID="d703eb08-df59-4676-a522-c869982a8772"
Jan 23 13:47:26 crc kubenswrapper[4771]: E0123 13:47:26.809049 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" podUID="d703eb08-df59-4676-a522-c869982a8772"
Jan 23 13:47:29 crc kubenswrapper[4771]: E0123 13:47:29.508724 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492"
Jan 23 13:47:29 crc kubenswrapper[4771]: E0123 13:47:29.509203 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcj25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-br5g2_openstack-operators(02c8537d-9470-4887-9fb9-0700448bbc40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:29 crc kubenswrapper[4771]: E0123 13:47:29.510374 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" podUID="02c8537d-9470-4887-9fb9-0700448bbc40"
Jan 23 13:47:29 crc kubenswrapper[4771]: E0123 13:47:29.844739 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" podUID="02c8537d-9470-4887-9fb9-0700448bbc40"
Jan 23 13:47:30 crc kubenswrapper[4771]: E0123 13:47:30.211315 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf"
Jan 23 13:47:30 crc kubenswrapper[4771]: E0123 13:47:30.211574 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-566lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-sx4v9_openstack-operators(55997410-62dd-4965-938a-6e1cdfba0cd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:30 crc kubenswrapper[4771]: E0123 13:47:30.212893 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" podUID="55997410-62dd-4965-938a-6e1cdfba0cd5"
Jan 23 13:47:30 crc kubenswrapper[4771]: I0123 13:47:30.311655 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 13:47:30 crc kubenswrapper[4771]: I0123 13:47:30.311734 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:47:30 crc kubenswrapper[4771]: E0123 13:47:30.854344 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84"
Jan 23 13:47:30 crc kubenswrapper[4771]: E0123 13:47:30.855255 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9hdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn_openstack-operators(2e2e8c05-b33d-410f-ad27-e80ed0a243ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:30 crc kubenswrapper[4771]: E0123 13:47:30.855677 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" podUID="55997410-62dd-4965-938a-6e1cdfba0cd5"
Jan 23 13:47:30 crc kubenswrapper[4771]: E0123 13:47:30.856537 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" podUID="2e2e8c05-b33d-410f-ad27-e80ed0a243ee"
Jan 23 13:47:31 crc kubenswrapper[4771]: E0123 13:47:31.866576 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" podUID="2e2e8c05-b33d-410f-ad27-e80ed0a243ee"
Jan 23 13:47:33 crc kubenswrapper[4771]: E0123 13:47:33.341785 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/openstack-k8s-operators/watcher-operator:a84a7bcd4fae88273999aa344ed1ae901949322b"
Jan 23 13:47:33 crc kubenswrapper[4771]: E0123 13:47:33.341840 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/openstack-k8s-operators/watcher-operator:a84a7bcd4fae88273999aa344ed1ae901949322b"
Jan 23 13:47:33 crc kubenswrapper[4771]: E0123 13:47:33.341999 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.240:5001/openstack-k8s-operators/watcher-operator:a84a7bcd4fae88273999aa344ed1ae901949322b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-48jqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-78dbdc4d57-djf7x_openstack-operators(eb45ce0a-9090-4553-b2b3-6d025d099f0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:33 crc kubenswrapper[4771]: E0123 13:47:33.343177 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" podUID="eb45ce0a-9090-4553-b2b3-6d025d099f0f"
Jan 23 13:47:33 crc kubenswrapper[4771]: E0123 13:47:33.885501 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/openstack-k8s-operators/watcher-operator:a84a7bcd4fae88273999aa344ed1ae901949322b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" podUID="eb45ce0a-9090-4553-b2b3-6d025d099f0f"
Jan 23 13:47:35 crc kubenswrapper[4771]: E0123 13:47:35.589788 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349"
Jan 23 13:47:35 crc kubenswrapper[4771]: E0123 13:47:35.590015 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jl4wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-pvkv5_openstack-operators(8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:47:35 crc kubenswrapper[4771]: E0123 13:47:35.591463 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" podUID="8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570"
Jan 23 13:47:35 crc kubenswrapper[4771]: E0123 13:47:35.939118 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" podUID="8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.915811 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" event={"ID":"6ac8c39a-105d-4ab1-afd6-4786c9aa1386","Type":"ContainerStarted","Data":"004fd0265b7bf0af088e1d064706dff8bd0b4c0935f160bbd7c68bb27f43a33b"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.917476 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.919861 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" event={"ID":"7449c4dc-9594-459b-9e89-23cb5e86139b","Type":"ContainerStarted","Data":"d3e4c8cb776191b7b025eb40a118e7cef813d1db0296e29adb5c149a1443ab02"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.920384 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.922007 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" event={"ID":"5d096797-513e-4b08-afe9-0c19eb099a3d","Type":"ContainerStarted","Data":"5e8d5198892a9c42b8acaca2293033ce38b99b42835fae49d325ddca4ef07ea5"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.922457 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.924826 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" event={"ID":"02ecde71-15d7-4cf0-8928-505b2f0899fd","Type":"ContainerStarted","Data":"673cc7927dfff74aef203f4bf68faef094cc7da8d201bf890911ca7de5256992"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.928606 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" event={"ID":"06cccb54-4ed2-4ee7-af3e-c26532e49b23","Type":"ContainerStarted","Data":"f7ac3d316237e7abc2963da78cbf371e0809b9b021f0462dd5779c9c6dc66666"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.929463 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.931262 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" event={"ID":"f27f9a1d-08bf-4576-90e8-0d5e9438b3d7","Type":"ContainerStarted","Data":"8148af0f5b97b167509a9f8479f0d67932b53cbb065fa98e0f9a590138fd1e04"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.931956 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.933439 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" event={"ID":"a6bb27ef-c367-4c44-9137-7e713f44271d","Type":"ContainerStarted","Data":"6e78914727e6a8cfd8f885d2c11ce1c906d0c43173e8febd5f6e0dd4e53debab"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.933903 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.935519 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" event={"ID":"25f36fa7-ebf2-406b-bb09-f3a83fd19685","Type":"ContainerStarted","Data":"6d136a9d57ee43d0ba65e3c5a7e47578dca1aae2e69fba8de566fd6d51d65265"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.936080 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.937623 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" event={"ID":"e5c92a50-6224-413e-b4ca-9bdca838de01","Type":"ContainerStarted","Data":"ee3308cdbb293bab5c0b1632ee8d9094b8dfc142a5056e9e10a7ad116d4e1959"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.938159 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.939731 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" event={"ID":"6c48b549-8b0b-4914-be38-157d50994b3b","Type":"ContainerStarted","Data":"4efadf87c40fc632319a47db5a2b09dc50f7349a174316719d78fe50de934112"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.940317 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.942563 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" event={"ID":"763a3e24-643d-473f-bbb2-d7f4816a0b58","Type":"ContainerStarted","Data":"a10cb1f8b9bf9555cf52f6990e62b30378ab72d95c3494ad5f9401c8c21c21f6"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.942874 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.944306 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" event={"ID":"ec9af354-7d56-47ee-aa2f-be57edf2c7bc","Type":"ContainerStarted","Data":"01c9a7013876be46a0113f6a920e703389fd2c24f0ff6a9c6094e41d8c94880e"}
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.944542 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l"
Jan 23 13:47:36 crc kubenswrapper[4771]: I0123 13:47:36.958863 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc" podStartSLOduration=5.99838824 podStartE2EDuration="29.958842251s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.913515852 +0000 UTC m=+870.936053477" lastFinishedPulling="2026-01-23 13:47:33.873969833 +0000 UTC m=+894.896507488" observedRunningTime="2026-01-23 13:47:36.947095608 +0000 UTC m=+897.969633253" watchObservedRunningTime="2026-01-23 13:47:36.958842251 +0000 UTC m=+897.981379876"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.022351 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m" podStartSLOduration=4.343426496 podStartE2EDuration="30.022326224s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:10.042836782 +0000 UTC m=+871.065374407" lastFinishedPulling="2026-01-23 13:47:35.72173651 +0000 UTC m=+896.744274135" observedRunningTime="2026-01-23 13:47:37.017311375 +0000 UTC m=+898.039849010" watchObservedRunningTime="2026-01-23 13:47:37.022326224 +0000 UTC m=+898.044863849"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.066542 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r" podStartSLOduration=4.346736371 podStartE2EDuration="30.066520352s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:10.001856196 +0000 UTC m=+871.024393811" lastFinishedPulling="2026-01-23 13:47:35.721640167 +0000 UTC m=+896.744177792" observedRunningTime="2026-01-23 13:47:37.059669363 +0000 UTC m=+898.082207008" watchObservedRunningTime="2026-01-23 13:47:37.066520352 +0000 UTC m=+898.089057997"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.150364 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj" podStartSLOduration=5.762776146 podStartE2EDuration="31.150338562s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:08.486273033 +0000 UTC m=+869.508810678" lastFinishedPulling="2026-01-23 13:47:33.873835459 +0000 UTC m=+894.896373094" observedRunningTime="2026-01-23 13:47:37.134085875 +0000 UTC m=+898.156623500" watchObservedRunningTime="2026-01-23 13:47:37.150338562 +0000 UTC m=+898.172876187"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.215319 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2" podStartSLOduration=5.691302119 podStartE2EDuration="31.215293732s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:08.350841498 +0000 UTC m=+869.373379123" lastFinishedPulling="2026-01-23 13:47:33.874833111 +0000 UTC m=+894.897370736" observedRunningTime="2026-01-23 13:47:37.166175566 +0000 UTC m=+898.188713211" watchObservedRunningTime="2026-01-23 13:47:37.215293732 +0000 UTC m=+898.237831357"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.226147 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp" podStartSLOduration=4.546942769 podStartE2EDuration="30.226121506s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:10.042528312 +0000 UTC m=+871.065065937" lastFinishedPulling="2026-01-23 13:47:35.721707049 +0000 UTC m=+896.744244674" observedRunningTime="2026-01-23 13:47:37.207972658 +0000 UTC m=+898.230510293" watchObservedRunningTime="2026-01-23 13:47:37.226121506 +0000 UTC m=+898.248659131"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.265358 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vn6bc" podStartSLOduration=4.586348896 podStartE2EDuration="30.265331566s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:10.034826397 +0000 UTC m=+871.057364022" lastFinishedPulling="2026-01-23 13:47:35.713809067 +0000 UTC m=+896.736346692" observedRunningTime="2026-01-23 13:47:37.25573267 +0000 UTC m=+898.278270295" watchObservedRunningTime="2026-01-23 13:47:37.265331566 +0000 UTC m=+898.287869191"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.282811 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25" podStartSLOduration=4.066306359 podStartE2EDuration="31.282785422s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:08.505191975 +0000 UTC m=+869.527729600" lastFinishedPulling="2026-01-23 13:47:35.721671038 +0000 UTC m=+896.744208663" observedRunningTime="2026-01-23 13:47:37.273163995 +0000 UTC m=+898.295701640" watchObservedRunningTime="2026-01-23 13:47:37.282785422 +0000 UTC m=+898.305323047"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.305656 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l" podStartSLOduration=4.633485317 podStartE2EDuration="30.3056322s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:10.049514384 +0000 UTC m=+871.072052009" lastFinishedPulling="2026-01-23 13:47:35.721661267 +0000 UTC m=+896.744198892" observedRunningTime="2026-01-23 13:47:37.304789603 +0000 UTC m=+898.327327238" watchObservedRunningTime="2026-01-23 13:47:37.3056322 +0000 UTC m=+898.328169815"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.355131 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m" podStartSLOduration=5.982073773 podStartE2EDuration="31.355099446s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:08.500794766 +0000 UTC m=+869.523332411" lastFinishedPulling="2026-01-23 13:47:33.873820469 +0000 UTC m=+894.896358084" observedRunningTime="2026-01-23 13:47:37.347758892 +0000 UTC m=+898.370296517" watchObservedRunningTime="2026-01-23 13:47:37.355099446 +0000 UTC m=+898.377637071"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.373669 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt" podStartSLOduration=6.462757473 podStartE2EDuration="30.373647436s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.962993948 +0000 UTC m=+870.985531573" lastFinishedPulling="2026-01-23 13:47:33.873883891 +0000 UTC m=+894.896421536" observedRunningTime="2026-01-23 13:47:37.368710739 +0000 UTC m=+898.391248364" watchObservedRunningTime="2026-01-23 13:47:37.373647436 +0000 UTC m=+898.396185061"
Jan 23 13:47:37 crc kubenswrapper[4771]: I0123 13:47:37.396110 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh" podStartSLOduration=5.347275868 podStartE2EDuration="31.396088231s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.673832896 +0000 UTC m=+870.696370521" lastFinishedPulling="2026-01-23 13:47:35.722645239 +0000 UTC m=+896.745182884" observedRunningTime="2026-01-23 13:47:37.394339466 +0000 UTC m=+898.416877111" watchObservedRunningTime="2026-01-23 13:47:37.396088231 +0000 UTC m=+898.418625856"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.035869 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.055158 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b4af8681-84d1-4cf7-b3a6-b167146e1973-cert\") pod \"infra-operator-controller-manager-694cf4f878-mxmqk\" (UID: \"b4af8681-84d1-4cf7-b3a6-b167146e1973\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.303001 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-72w5z"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.310662 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.644789 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.651221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4e79abf5-0755-4fec-998c-b4eba8ebe531-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr\" (UID: \"4e79abf5-0755-4fec-998c-b4eba8ebe531\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.823138 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"]
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.888905 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-7c87h"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.897465 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.949340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.956067 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/08c96bfa-007d-41c5-a03a-4e92c9083c3f-metrics-certs\") pod \"openstack-operator-controller-manager-68f54d99d8-mwsk5\" (UID: \"08c96bfa-007d-41c5-a03a-4e92c9083c3f\") " pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.975794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" event={"ID":"b4af8681-84d1-4cf7-b3a6-b167146e1973","Type":"ContainerStarted","Data":"a2999bccd8327d1c2ad2bec741effc7308431baa2d172f9bbb1e4a69286a9563"}
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.979025 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" event={"ID":"368370f7-de60-484e-8ad6-35d0298c2520","Type":"ContainerStarted","Data":"76e6c462e2bc79102ab2f2d31f9557b35d6b6800f258f035d428242f81e84926"}
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.979299 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww"
Jan 23 13:47:39 crc kubenswrapper[4771]: I0123 13:47:39.993651 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww" podStartSLOduration=3.968617716 podStartE2EDuration="33.993628883s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.673801575 +0000 UTC m=+870.696339200" lastFinishedPulling="2026-01-23 13:47:39.698812742 +0000 UTC m=+900.721350367" observedRunningTime="2026-01-23 13:47:39.991853567 +0000 UTC m=+901.014391192" watchObservedRunningTime="2026-01-23 13:47:39.993628883 +0000 UTC m=+901.016166508"
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.072712 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8rgxx"
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.076331 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.188363 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"]
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.607136 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"]
Jan 23 13:47:40 crc kubenswrapper[4771]: W0123 13:47:40.627709 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08c96bfa_007d_41c5_a03a_4e92c9083c3f.slice/crio-3b03e005332632fda3a8be281b8355747d5e71872186c4a2751565a29a1511f9 WatchSource:0}: Error finding container 3b03e005332632fda3a8be281b8355747d5e71872186c4a2751565a29a1511f9: Status 404 returned error can't find the container with id 3b03e005332632fda3a8be281b8355747d5e71872186c4a2751565a29a1511f9
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.995155 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" event={"ID":"08c96bfa-007d-41c5-a03a-4e92c9083c3f","Type":"ContainerStarted","Data":"d26c3c143bacab5dfcd46fe1f92b2fb562d5672d178913bd16dede84002ed4e3"}
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.995633 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.995651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" event={"ID":"08c96bfa-007d-41c5-a03a-4e92c9083c3f","Type":"ContainerStarted","Data":"3b03e005332632fda3a8be281b8355747d5e71872186c4a2751565a29a1511f9"}
Jan 23 13:47:40 crc kubenswrapper[4771]: I0123 13:47:40.998964 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" event={"ID":"4e79abf5-0755-4fec-998c-b4eba8ebe531","Type":"ContainerStarted","Data":"c57878d13ae80f7e7008333f36e6dc05c9b9ca9eb11ea9015e33da7f270215d0"}
Jan 23 13:47:41 crc kubenswrapper[4771]: I0123 13:47:41.006093 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" event={"ID":"a3ff2047-0c52-4dee-a435-c88cb8c2690d","Type":"ContainerStarted","Data":"b100d3fb8817ff2d196dfdd66fc6a018c2ab88f93ac045bc7f8a117857d80581"}
Jan 23 13:47:41 crc kubenswrapper[4771]: I0123 13:47:41.006565 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn"
Jan 23 13:47:41 crc kubenswrapper[4771]: I0123 13:47:41.034908 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5" podStartSLOduration=34.034891847 podStartE2EDuration="34.034891847s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:47:41.033067648 +0000 UTC m=+902.055605273" watchObservedRunningTime="2026-01-23 13:47:41.034891847 +0000 UTC m=+902.057429472"
Jan 23 13:47:42 crc kubenswrapper[4771]: I0123 13:47:42.019285 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" event={"ID":"d703eb08-df59-4676-a522-c869982a8772","Type":"ContainerStarted","Data":"37aaa2c3e5da209f1914ad57297660af13b745d5af63c38070aaaf588b92f410"}
Jan 23 13:47:42 crc kubenswrapper[4771]: I0123 13:47:42.020370 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd"
Jan 23 13:47:42 crc kubenswrapper[4771]: I0123 13:47:42.035983 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn" podStartSLOduration=4.883830093 podStartE2EDuration="36.035958088s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:08.8013659 +0000 UTC m=+869.823903525" lastFinishedPulling="2026-01-23 13:47:39.953493895 +0000 UTC m=+900.976031520" observedRunningTime="2026-01-23 13:47:41.078454314 +0000 UTC m=+902.100991959" watchObservedRunningTime="2026-01-23 13:47:42.035958088 +0000 UTC m=+903.058495723"
Jan 23 13:47:42 crc kubenswrapper[4771]: I0123 13:47:42.036656 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd" podStartSLOduration=4.06894571 podStartE2EDuration="35.03664943s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.873586619 +0000 UTC m=+870.896124234" lastFinishedPulling="2026-01-23 13:47:40.841290339 +0000 UTC m=+901.863827954" observedRunningTime="2026-01-23 13:47:42.035190853 +0000 UTC m=+903.057728498" watchObservedRunningTime="2026-01-23 13:47:42.03664943 +0000 UTC m=+903.059187055"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.050871 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" event={"ID":"b4af8681-84d1-4cf7-b3a6-b167146e1973","Type":"ContainerStarted","Data":"0d22b685d6112b9490b4b7b669ec55a3b6db38fc28cd2135016a687ad46129b7"}
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.051620 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.053750 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" event={"ID":"4e79abf5-0755-4fec-998c-b4eba8ebe531","Type":"ContainerStarted","Data":"67d85fbc68c12a8c2b09a46a4e67a1cf7da53265860920d0645a6d2649cb7408"}
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.053905 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.055383 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" event={"ID":"02c8537d-9470-4887-9fb9-0700448bbc40","Type":"ContainerStarted","Data":"57a8847a80579e70f497efa75cf464f654600f90eeec50e36e01ae4e9abe1e7f"}
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.055764 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.057851 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" event={"ID":"2e2e8c05-b33d-410f-ad27-e80ed0a243ee","Type":"ContainerStarted","Data":"03797fc448ca77b275784ff8a42d530cf159b942024397c041ca6183f5534884"}
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.058278 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.059678 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" event={"ID":"55997410-62dd-4965-938a-6e1cdfba0cd5","Type":"ContainerStarted","Data":"1506f4520ce9982950b9f8e403002953692fbea67da9c3ce58e430549456d121"}
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.060137 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.080838 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk" podStartSLOduration=34.566153173000004 podStartE2EDuration="40.080820729s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:39.842232061 +0000 UTC m=+900.864769686" lastFinishedPulling="2026-01-23 13:47:45.356899617 +0000 UTC m=+906.379437242" observedRunningTime="2026-01-23 13:47:46.077382139 +0000 UTC m=+907.099919774" watchObservedRunningTime="2026-01-23 13:47:46.080820729 +0000 UTC m=+907.103358354"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.143870 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr" podStartSLOduration=34.003046661 podStartE2EDuration="39.143839697s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:40.216637348 +0000 UTC m=+901.239174973" lastFinishedPulling="2026-01-23 13:47:45.357430384 +0000 UTC m=+906.379968009" observedRunningTime="2026-01-23 13:47:46.13358847 +0000 UTC m=+907.156126115" watchObservedRunningTime="2026-01-23 13:47:46.143839697 +0000 UTC m=+907.166377322"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.194326 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9" podStartSLOduration=3.674054081 podStartE2EDuration="39.194304985s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.839734011 +0000 UTC m=+870.862271636" lastFinishedPulling="2026-01-23 13:47:45.359984915 +0000 UTC m=+906.382522540" observedRunningTime="2026-01-23 13:47:46.186138365 +0000 UTC m=+907.208676000" watchObservedRunningTime="2026-01-23 13:47:46.194304985 +0000 UTC m=+907.216842610"
Jan 23 13:47:46 crc kubenswrapper[4771]: I0123 13:47:46.194692 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2" podStartSLOduration=4.488435446 podStartE2EDuration="40.194683206s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.650892355 +0000 UTC m=+870.673429980" lastFinishedPulling="2026-01-23 13:47:45.357140115 +0000 UTC m=+906.379677740" observedRunningTime="2026-01-23 13:47:46.159463725 +0000 UTC m=+907.182001370" watchObservedRunningTime="2026-01-23 13:47:46.194683206 +0000 UTC m=+907.217220841"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.062692 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9pg25"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.076786 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-p4s4m"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.090744 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-qtsk2"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.096527 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn" podStartSLOduration=5.474604226 podStartE2EDuration="41.096503907s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.735604094 +0000 UTC m=+870.758141709" lastFinishedPulling="2026-01-23 13:47:45.357503765 +0000 UTC m=+906.380041390" observedRunningTime="2026-01-23 13:47:46.219803177 +0000 UTC m=+907.242340812" watchObservedRunningTime="2026-01-23 13:47:47.096503907 +0000 UTC m=+908.119041532"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.150788 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-g7vwj"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.412073 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-x7skn"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.528850 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-l2gvh"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.674146 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-7tzww"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.958160 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-c9h8l"
Jan 23 13:47:47 crc kubenswrapper[4771]: I0123 13:47:47.995015 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-xpp8r"
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.031853 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-89prd"
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.080263 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" event={"ID":"8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570","Type":"ContainerStarted","Data":"bc7682f7ef297098e1353feec86b93e0dc57f3b3c0a478bbb8d12c38b0012747"}
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.080690 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5"
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.120865 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5" podStartSLOduration=4.081913566 podStartE2EDuration="42.12083799s" podCreationTimestamp="2026-01-23 13:47:06 +0000 UTC" firstStartedPulling="2026-01-23 13:47:09.673622539 +0000 UTC m=+870.696160164" lastFinishedPulling="2026-01-23 13:47:47.712546963 +0000 UTC m=+908.735084588" observedRunningTime="2026-01-23 13:47:48.10514308 +0000 UTC m=+909.127680705" watchObservedRunningTime="2026-01-23 13:47:48.12083799 +0000 UTC m=+909.143375615"
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.130892 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-dgqzc"
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.217266 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m76mt"
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.226305 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-nm9jp"
Jan 23 13:47:48 crc kubenswrapper[4771]: I0123 13:47:48.378929 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-5s27m"
Jan 23 13:47:50 crc kubenswrapper[4771]: I0123 13:47:50.084928 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68f54d99d8-mwsk5"
Jan 23 13:47:57 crc kubenswrapper[4771]: I0123 13:47:57.469312 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-br5g2"
Jan 23 13:47:57 crc kubenswrapper[4771]: I0123 13:47:57.729387 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pvkv5"
Jan 23 13:47:57 crc kubenswrapper[4771]: I0123 13:47:57.842833 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn"
Jan 23 13:47:58 crc kubenswrapper[4771]: I0123 13:47:58.041689 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-sx4v9"
Jan 23 13:47:59 crc kubenswrapper[4771]: I0123 13:47:59.317674 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mxmqk"
Jan 23 13:47:59 crc kubenswrapper[4771]: I0123 13:47:59.908128 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr"
Jan 23 13:48:00 crc kubenswrapper[4771]: I0123 13:48:00.311773 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 13:48:00 crc kubenswrapper[4771]: I0123 13:48:00.311855 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:48:04 crc kubenswrapper[4771]: I0123 13:48:04.205096 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" event={"ID":"eb45ce0a-9090-4553-b2b3-6d025d099f0f","Type":"ContainerStarted","Data":"9742e30889e42c46224b6b8bdc698bf41112d42d169cf8110cf21fa8c67e6461"}
Jan 23 13:48:04 crc kubenswrapper[4771]: I0123 13:48:04.205863 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x"
Jan 23 13:48:04 crc kubenswrapper[4771]: I0123 13:48:04.225467 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x" podStartSLOduration=3.460147524 podStartE2EDuration="57.225438902s" podCreationTimestamp="2026-01-23 13:47:07 +0000 UTC" firstStartedPulling="2026-01-23 13:47:10.03053925 +0000 UTC m=+871.053076875" lastFinishedPulling="2026-01-23 13:48:03.795830628 +0000 UTC m=+924.818368253" observedRunningTime="2026-01-23 13:48:04.218478935 +0000 UTC m=+925.241016570" watchObservedRunningTime="2026-01-23 13:48:04.225438902 +0000 UTC m=+925.247976537"
Jan 23 13:48:08 crc kubenswrapper[4771]: I0123 13:48:08.135486 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-78dbdc4d57-djf7x"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.733157 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q542n"]
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.735145 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q542n"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.760025 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q542n"]
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.926331 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-catalog-content\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.926591 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-utilities\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.926675 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-927m7\" (UniqueName: \"kubernetes.io/projected/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-kube-api-access-927m7\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.977501 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbfdc9b97-87fwq"]
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.979262 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.982296 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gjn2l"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.982739 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.982916 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.987765 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbfdc9b97-87fwq"]
Jan 23 13:48:26 crc kubenswrapper[4771]: I0123 13:48:26.989320 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.030954 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-927m7\" (UniqueName: \"kubernetes.io/projected/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-kube-api-access-927m7\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n"
Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.031052 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-catalog-content\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n"
Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.031105 4771
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-utilities\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.031734 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-utilities\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.032640 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-catalog-content\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.056351 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-927m7\" (UniqueName: \"kubernetes.io/projected/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-kube-api-access-927m7\") pod \"community-operators-q542n\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.082875 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c5cd96d89-drjcn"] Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.084376 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.091751 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.103431 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c5cd96d89-drjcn"] Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.131921 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lts\" (UniqueName: \"kubernetes.io/projected/7f9447a5-17a1-4b31-b96c-26fedbb30f47-kube-api-access-k9lts\") pod \"dnsmasq-dns-bbfdc9b97-87fwq\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.132006 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9447a5-17a1-4b31-b96c-26fedbb30f47-config\") pod \"dnsmasq-dns-bbfdc9b97-87fwq\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.233259 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwn86\" (UniqueName: \"kubernetes.io/projected/c19fc401-2298-4b38-9f40-0d6fa490445d-kube-api-access-cwn86\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.233319 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lts\" 
(UniqueName: \"kubernetes.io/projected/7f9447a5-17a1-4b31-b96c-26fedbb30f47-kube-api-access-k9lts\") pod \"dnsmasq-dns-bbfdc9b97-87fwq\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.233447 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9447a5-17a1-4b31-b96c-26fedbb30f47-config\") pod \"dnsmasq-dns-bbfdc9b97-87fwq\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.233504 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-config\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.233525 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-dns-svc\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.235048 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9447a5-17a1-4b31-b96c-26fedbb30f47-config\") pod \"dnsmasq-dns-bbfdc9b97-87fwq\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.260689 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lts\" (UniqueName: \"kubernetes.io/projected/7f9447a5-17a1-4b31-b96c-26fedbb30f47-kube-api-access-k9lts\") pod \"dnsmasq-dns-bbfdc9b97-87fwq\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.337896 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwn86\" (UniqueName: \"kubernetes.io/projected/c19fc401-2298-4b38-9f40-0d6fa490445d-kube-api-access-cwn86\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.338042 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-config\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.338064 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-dns-svc\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.338859 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-dns-svc\") pod 
\"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.340294 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-config\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.349061 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.351500 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.366172 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwn86\" (UniqueName: \"kubernetes.io/projected/c19fc401-2298-4b38-9f40-0d6fa490445d-kube-api-access-cwn86\") pod \"dnsmasq-dns-c5cd96d89-drjcn\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.502503 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:48:27 crc kubenswrapper[4771]: I0123 13:48:27.954122 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbfdc9b97-87fwq"] Jan 23 13:48:28 crc kubenswrapper[4771]: I0123 13:48:28.121205 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c5cd96d89-drjcn"] Jan 23 13:48:28 crc kubenswrapper[4771]: I0123 13:48:28.403034 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q542n"] Jan 23 13:48:28 crc kubenswrapper[4771]: I0123 13:48:28.407567 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" event={"ID":"c19fc401-2298-4b38-9f40-0d6fa490445d","Type":"ContainerStarted","Data":"bf9be56a07ed8679c430e489619fe9327bbd146296f8a5c63ea2b92aa239e21f"} Jan 23 13:48:28 crc kubenswrapper[4771]: I0123 13:48:28.408676 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" event={"ID":"7f9447a5-17a1-4b31-b96c-26fedbb30f47","Type":"ContainerStarted","Data":"54305f24e288c5c231d6896553a988ad36cd6b3c30765fc79aa2593cfbd9d300"} Jan 23 13:48:28 crc kubenswrapper[4771]: W0123 13:48:28.411680 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2be0c2bb_124a_4f4f_aec3_29edfaaaf554.slice/crio-0ce9ccf09e649690d56aadedad9d714601c2057d57647b78f58f6f817f2431e2 WatchSource:0}: Error finding container 0ce9ccf09e649690d56aadedad9d714601c2057d57647b78f58f6f817f2431e2: Status 404 returned error can't find the container with id 0ce9ccf09e649690d56aadedad9d714601c2057d57647b78f58f6f817f2431e2 Jan 23 13:48:29 crc kubenswrapper[4771]: I0123 13:48:29.427513 4771 generic.go:334] "Generic (PLEG): container finished" podID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerID="631f6ef7d591321bc5a8ab8152b9e559bd06f21b0fb6691fb7625e006a05255d" exitCode=0 Jan 23 13:48:29 crc kubenswrapper[4771]: I0123 13:48:29.427889 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q542n" 
event={"ID":"2be0c2bb-124a-4f4f-aec3-29edfaaaf554","Type":"ContainerDied","Data":"631f6ef7d591321bc5a8ab8152b9e559bd06f21b0fb6691fb7625e006a05255d"} Jan 23 13:48:29 crc kubenswrapper[4771]: I0123 13:48:29.427925 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q542n" event={"ID":"2be0c2bb-124a-4f4f-aec3-29edfaaaf554","Type":"ContainerStarted","Data":"0ce9ccf09e649690d56aadedad9d714601c2057d57647b78f58f6f817f2431e2"} Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.311428 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.311476 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.311517 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.312178 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dee83f309be07e5f0f1af35989d1377c17f49b4ede91bda4763351e5bf93274d"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.312248 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://dee83f309be07e5f0f1af35989d1377c17f49b4ede91bda4763351e5bf93274d" gracePeriod=600 Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.449900 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="dee83f309be07e5f0f1af35989d1377c17f49b4ede91bda4763351e5bf93274d" exitCode=0 Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.449943 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"dee83f309be07e5f0f1af35989d1377c17f49b4ede91bda4763351e5bf93274d"} Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.450000 4771 scope.go:117] "RemoveContainer" containerID="7197a1f46f7fe0a055c3cd1d599823ec1a0bf6cce8f38ed1e420f676015408ef" Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.869835 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbfdc9b97-87fwq"] Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.904417 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66f4d755d5-ksg8n"] Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.906643 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.915476 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66f4d755d5-ksg8n"] Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.988735 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq68t\" (UniqueName: \"kubernetes.io/projected/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-kube-api-access-kq68t\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.988820 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-config\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:30 crc kubenswrapper[4771]: I0123 13:48:30.988859 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-dns-svc\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.090346 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq68t\" (UniqueName: \"kubernetes.io/projected/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-kube-api-access-kq68t\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.090434 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-config\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.090459 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-dns-svc\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.091510 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-dns-svc\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.119368 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-config\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.121679 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq68t\" (UniqueName: 
\"kubernetes.io/projected/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-kube-api-access-kq68t\") pod \"dnsmasq-dns-66f4d755d5-ksg8n\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.241520 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.491340 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q542n" event={"ID":"2be0c2bb-124a-4f4f-aec3-29edfaaaf554","Type":"ContainerStarted","Data":"f0f455ec790fb4e58daeb84a535074e13890c53a3760ea0b93161d967910a3b4"} Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.491858 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c5cd96d89-drjcn"] Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.499361 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"dfc914e995173c379318536f5b71f7a2d9eafa2db96a43d222f1b68a93208d43"} Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.620518 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cf864db9c-rdpn7"] Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.622132 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.634023 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf864db9c-rdpn7"] Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.713159 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn6qr\" (UniqueName: \"kubernetes.io/projected/31590df7-b974-4f61-8530-5713c2f887c2-kube-api-access-mn6qr\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.713234 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-dns-svc\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.713297 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-config\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.839705 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-config\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.840140 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn6qr\" (UniqueName: 
\"kubernetes.io/projected/31590df7-b974-4f61-8530-5713c2f887c2-kube-api-access-mn6qr\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.840199 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-dns-svc\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.843182 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-dns-svc\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.843922 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-config\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.918991 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn6qr\" (UniqueName: \"kubernetes.io/projected/31590df7-b974-4f61-8530-5713c2f887c2-kube-api-access-mn6qr\") pod \"dnsmasq-dns-5cf864db9c-rdpn7\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:31 crc kubenswrapper[4771]: I0123 13:48:31.971276 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.120661 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.125849 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.136484 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f4d755d5-ksg8n"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.142704 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.143049 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cpt4p" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.143279 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.143493 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.144587 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.144754 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.146169 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.157916 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.205916 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54ddbcd685-tc9ls"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.209205 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.214984 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54ddbcd685-tc9ls"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.247878 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248242 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248266 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-dns-svc\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248299 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248321 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248343 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248364 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44tw\" (UniqueName: \"kubernetes.io/projected/bea41385-4d73-47af-94c4-9c9babe781d2-kube-api-access-k44tw\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248383 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248400 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx8qg\" (UniqueName: 
\"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-kube-api-access-gx8qg\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248459 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248480 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248505 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248549 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-config\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.248575 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357272 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357311 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357330 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357387 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-config\") pod 
\"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357417 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357468 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357483 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357501 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-dns-svc\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357524 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357537 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357556 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k44tw\" (UniqueName: \"kubernetes.io/projected/bea41385-4d73-47af-94c4-9c9babe781d2-kube-api-access-k44tw\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357594 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.357610 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gx8qg\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-kube-api-access-gx8qg\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.359280 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-config\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.359498 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.366250 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.366582 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.370464 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.375478 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-dns-svc\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.375989 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.385747 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.391942 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-config-data\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " 
pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.392300 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.396682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.398027 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.405265 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k44tw\" (UniqueName: \"kubernetes.io/projected/bea41385-4d73-47af-94c4-9c9babe781d2-kube-api-access-k44tw\") pod \"dnsmasq-dns-54ddbcd685-tc9ls\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.456629 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx8qg\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-kube-api-access-gx8qg\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.515273 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.536520 4771 generic.go:334] "Generic (PLEG): container finished" podID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerID="f0f455ec790fb4e58daeb84a535074e13890c53a3760ea0b93161d967910a3b4" exitCode=0 Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.545941 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q542n" event={"ID":"2be0c2bb-124a-4f4f-aec3-29edfaaaf554","Type":"ContainerDied","Data":"f0f455ec790fb4e58daeb84a535074e13890c53a3760ea0b93161d967910a3b4"} Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.546041 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f4d755d5-ksg8n"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.674088 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.680773 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.749966 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.751844 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.767756 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.768058 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.768305 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.769991 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8zg7b" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.770016 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.770138 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.772844 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.803969 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.912162 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf864db9c-rdpn7"] Jan 23 13:48:32 crc kubenswrapper[4771]: W0123 13:48:32.968634 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31590df7_b974_4f61_8530_5713c2f887c2.slice/crio-9182554cbec0d38b2d2ad663c01d90c613cfae4cb3f7776360c11e65979b6a78 WatchSource:0}: Error finding container 9182554cbec0d38b2d2ad663c01d90c613cfae4cb3f7776360c11e65979b6a78: Status 404 returned error can't find the container with id 9182554cbec0d38b2d2ad663c01d90c613cfae4cb3f7776360c11e65979b6a78 Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994156 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47df\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-kube-api-access-r47df\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994221 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994245 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-plugins\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994297 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994312 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994347 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994362 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994385 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994401 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:32 crc kubenswrapper[4771]: I0123 13:48:32.994467 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.095760 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.095820 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r47df\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-kube-api-access-r47df\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.095850 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.095927 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.095954 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.095995 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.096019 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.096048 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.096074 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.096118 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc 
kubenswrapper[4771]: I0123 13:48:33.096705 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.096911 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.097353 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.097400 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.097700 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.101346 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.102466 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.128107 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.132494 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.133118 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.133949 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.200391 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r47df\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-kube-api-access-r47df\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.214842 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.432204 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.438210 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.440398 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.442583 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-2fwwr" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.442820 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.443039 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.443154 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.443272 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.443466 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.444070 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.455955 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.510946 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.513537 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/add41260-19c8-4989-a0a9-97a93316c6e8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.513684 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.513812 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.513960 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.514102 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.514237 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/add41260-19c8-4989-a0a9-97a93316c6e8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.514350 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.514596 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 
13:48:33.514709 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4pgw\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-kube-api-access-x4pgw\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.514805 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.551531 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.601617 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" event={"ID":"31590df7-b974-4f61-8530-5713c2f887c2","Type":"ContainerStarted","Data":"9182554cbec0d38b2d2ad663c01d90c613cfae4cb3f7776360c11e65979b6a78"} Jan 23 13:48:33 crc kubenswrapper[4771]: W0123 13:48:33.602778 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c3f2be4_082b_4eb5_88d6_2b069d2dd361.slice/crio-96ca02b65c8ab8aecdec02067a716d94d1061f300173eb07dcfc18edca46de3c WatchSource:0}: Error finding container 96ca02b65c8ab8aecdec02067a716d94d1061f300173eb07dcfc18edca46de3c: Status 404 returned error can't find the container with id 96ca02b65c8ab8aecdec02067a716d94d1061f300173eb07dcfc18edca46de3c Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.607689 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" event={"ID":"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7","Type":"ContainerStarted","Data":"0053da20c0d9e2719bb861ff0ec34c5e613cf2168e32b21b3b3149c30fa133a8"} Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.618567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4pgw\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-kube-api-access-x4pgw\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.618628 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.618715 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.620482 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628318 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/add41260-19c8-4989-a0a9-97a93316c6e8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628386 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628461 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628614 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628708 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/add41260-19c8-4989-a0a9-97a93316c6e8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.628752 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.629849 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.630026 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.630085 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.631281 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/add41260-19c8-4989-a0a9-97a93316c6e8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.631527 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.632910 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.633098 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/add41260-19c8-4989-a0a9-97a93316c6e8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.634188 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.635338 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/add41260-19c8-4989-a0a9-97a93316c6e8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.647808 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4pgw\" (UniqueName: 
\"kubernetes.io/projected/add41260-19c8-4989-a0a9-97a93316c6e8-kube-api-access-x4pgw\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.681003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"add41260-19c8-4989-a0a9-97a93316c6e8\") " pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: W0123 13:48:33.721713 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea41385_4d73_47af_94c4_9c9babe781d2.slice/crio-890de292640aacc5b796ab00760b51b0e5b1287f98eb73110efe9455dcdd16cb WatchSource:0}: Error finding container 890de292640aacc5b796ab00760b51b0e5b1287f98eb73110efe9455dcdd16cb: Status 404 returned error can't find the container with id 890de292640aacc5b796ab00760b51b0e5b1287f98eb73110efe9455dcdd16cb Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.734489 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54ddbcd685-tc9ls"] Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.811469 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.969976 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.971646 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.985798 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.986082 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.986760 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hl2fd" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.986826 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 13:48:33 crc kubenswrapper[4771]: I0123 13:48:33.995792 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.051456 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.084510 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.145682 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-config-data-default\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.146150 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxlqc\" 
(UniqueName: \"kubernetes.io/projected/34159c2a-f5ad-4b4c-a1c6-556001c43134-kube-api-access-xxlqc\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.146205 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-kolla-config\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.146247 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-operator-scripts\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.146275 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/34159c2a-f5ad-4b4c-a1c6-556001c43134-config-data-generated\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.146335 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.146353 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/34159c2a-f5ad-4b4c-a1c6-556001c43134-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.146387 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34159c2a-f5ad-4b4c-a1c6-556001c43134-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250366 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-kolla-config\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250485 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-operator-scripts\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250523 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/34159c2a-f5ad-4b4c-a1c6-556001c43134-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250575 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250598 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/34159c2a-f5ad-4b4c-a1c6-556001c43134-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250625 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34159c2a-f5ad-4b4c-a1c6-556001c43134-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-config-data-default\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.250720 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxlqc\" (UniqueName: \"kubernetes.io/projected/34159c2a-f5ad-4b4c-a1c6-556001c43134-kube-api-access-xxlqc\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.251884 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-kolla-config\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.254855 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.255065 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/34159c2a-f5ad-4b4c-a1c6-556001c43134-config-data-generated\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.261690 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-operator-scripts\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.274003 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34159c2a-f5ad-4b4c-a1c6-556001c43134-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.274288 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/34159c2a-f5ad-4b4c-a1c6-556001c43134-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.274713 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/34159c2a-f5ad-4b4c-a1c6-556001c43134-config-data-default\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.289126 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxlqc\" (UniqueName: \"kubernetes.io/projected/34159c2a-f5ad-4b4c-a1c6-556001c43134-kube-api-access-xxlqc\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.321013 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"34159c2a-f5ad-4b4c-a1c6-556001c43134\") " pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.614378 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.620551 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.667691 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q542n" event={"ID":"2be0c2bb-124a-4f4f-aec3-29edfaaaf554","Type":"ContainerStarted","Data":"82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2"} Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.679383 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c3f2be4-082b-4eb5-88d6-2b069d2dd361","Type":"ContainerStarted","Data":"96ca02b65c8ab8aecdec02067a716d94d1061f300173eb07dcfc18edca46de3c"} Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.697695 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q542n" podStartSLOduration=5.038121076 podStartE2EDuration="8.697674267s" podCreationTimestamp="2026-01-23 13:48:26 +0000 UTC" firstStartedPulling="2026-01-23 13:48:29.431794162 +0000 UTC m=+950.454331797" lastFinishedPulling="2026-01-23 13:48:33.091347363 +0000 UTC m=+954.113884988" observedRunningTime="2026-01-23 13:48:34.697067109 +0000 UTC m=+955.719604754" watchObservedRunningTime="2026-01-23 13:48:34.697674267 +0000 UTC m=+955.720211912" Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.705839 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"205cfab6-722b-4d70-bdb7-3a12aaeea6e2","Type":"ContainerStarted","Data":"4a6fb67f7ffd345c55595251bf883000c533b801ed6a4b26f81be5bc4069b4dc"} Jan 23 13:48:34 crc kubenswrapper[4771]: I0123 13:48:34.714847 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" event={"ID":"bea41385-4d73-47af-94c4-9c9babe781d2","Type":"ContainerStarted","Data":"890de292640aacc5b796ab00760b51b0e5b1287f98eb73110efe9455dcdd16cb"} Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.336009 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.338372 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.343755 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dsdpx" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.347993 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.348245 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.363000 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.383790 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.500016 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.500767 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90863ead-98c1-4258-b980-919471f6d76c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.500823 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph8tw\" (UniqueName: \"kubernetes.io/projected/90863ead-98c1-4258-b980-919471f6d76c-kube-api-access-ph8tw\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.500868 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.500901 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/90863ead-98c1-4258-b980-919471f6d76c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.500971 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.501015 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.501038 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/90863ead-98c1-4258-b980-919471f6d76c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.602723 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.602790 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.602807 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/90863ead-98c1-4258-b980-919471f6d76c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.602837 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.602930 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90863ead-98c1-4258-b980-919471f6d76c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.602963 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph8tw\" (UniqueName: \"kubernetes.io/projected/90863ead-98c1-4258-b980-919471f6d76c-kube-api-access-ph8tw\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.603003 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.603029 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/90863ead-98c1-4258-b980-919471f6d76c-config-data-generated\") 
pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.604064 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/90863ead-98c1-4258-b980-919471f6d76c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.604936 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.605434 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.605454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.606289 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/90863ead-98c1-4258-b980-919471f6d76c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.612305 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90863ead-98c1-4258-b980-919471f6d76c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.624968 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/90863ead-98c1-4258-b980-919471f6d76c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.645066 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph8tw\" (UniqueName: \"kubernetes.io/projected/90863ead-98c1-4258-b980-919471f6d76c-kube-api-access-ph8tw\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.650682 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"90863ead-98c1-4258-b980-919471f6d76c\") " pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc 
kubenswrapper[4771]: I0123 13:48:35.682837 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.793694 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"add41260-19c8-4989-a0a9-97a93316c6e8","Type":"ContainerStarted","Data":"e3616c8bde4f8198e31226754e63d92c1ce814b3f1c80766d8dc24ee8e7a41bf"} Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.802574 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.803799 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.808715 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-tzrq5" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.808888 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.822289 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.834152 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.940460 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/972f2298-461d-46ec-a00a-19ea21a500a5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.940587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972f2298-461d-46ec-a00a-19ea21a500a5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.940643 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/972f2298-461d-46ec-a00a-19ea21a500a5-config-data\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.940693 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/972f2298-461d-46ec-a00a-19ea21a500a5-kolla-config\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:35 crc kubenswrapper[4771]: I0123 13:48:35.940746 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm8rd\" (UniqueName: \"kubernetes.io/projected/972f2298-461d-46ec-a00a-19ea21a500a5-kube-api-access-sm8rd\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.042443 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/972f2298-461d-46ec-a00a-19ea21a500a5-kolla-config\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.042536 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm8rd\" (UniqueName: \"kubernetes.io/projected/972f2298-461d-46ec-a00a-19ea21a500a5-kube-api-access-sm8rd\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.042561 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/972f2298-461d-46ec-a00a-19ea21a500a5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.042954 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972f2298-461d-46ec-a00a-19ea21a500a5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.043004 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/972f2298-461d-46ec-a00a-19ea21a500a5-config-data\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.043714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/972f2298-461d-46ec-a00a-19ea21a500a5-config-data\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.043903 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/972f2298-461d-46ec-a00a-19ea21a500a5-kolla-config\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.063958 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972f2298-461d-46ec-a00a-19ea21a500a5-combined-ca-bundle\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.065238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/972f2298-461d-46ec-a00a-19ea21a500a5-memcached-tls-certs\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.065870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm8rd\" (UniqueName: \"kubernetes.io/projected/972f2298-461d-46ec-a00a-19ea21a500a5-kube-api-access-sm8rd\") pod \"memcached-0\" (UID: \"972f2298-461d-46ec-a00a-19ea21a500a5\") " pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.175462 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.586538 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.628636 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.804987 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"34159c2a-f5ad-4b4c-a1c6-556001c43134","Type":"ContainerStarted","Data":"5f0eb3c9b43f982dd9cd08ec0e0b1fde995908b9d92d29cab26d5b93e1ba6fc1"} Jan 23 13:48:36 crc kubenswrapper[4771]: I0123 13:48:36.808434 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"90863ead-98c1-4258-b980-919471f6d76c","Type":"ContainerStarted","Data":"50bb45eb00a94219d9bf7ad1eb56e6f60db1e57fdc1354900b004f4b3ae1f16a"} Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.024547 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 13:48:37 crc kubenswrapper[4771]: W0123 13:48:37.041268 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod972f2298_461d_46ec_a00a_19ea21a500a5.slice/crio-408256bdbed5915c22e787ccda10f9b00f8fb450e40927822cfbd619970a3e14 WatchSource:0}: Error finding container 408256bdbed5915c22e787ccda10f9b00f8fb450e40927822cfbd619970a3e14: Status 404 returned error can't find the container with id 408256bdbed5915c22e787ccda10f9b00f8fb450e40927822cfbd619970a3e14 Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.351877 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.351943 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.437663 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.572217 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.577303 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.591480 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g8sqn" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.615804 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.718814 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2ww\" (UniqueName: \"kubernetes.io/projected/b4fa8367-bad7-4681-93a1-835923d93421-kube-api-access-jb2ww\") pod \"kube-state-metrics-0\" (UID: \"b4fa8367-bad7-4681-93a1-835923d93421\") " pod="openstack/kube-state-metrics-0" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.826901 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2ww\" (UniqueName: \"kubernetes.io/projected/b4fa8367-bad7-4681-93a1-835923d93421-kube-api-access-jb2ww\") pod \"kube-state-metrics-0\" (UID: \"b4fa8367-bad7-4681-93a1-835923d93421\") " pod="openstack/kube-state-metrics-0" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.836798 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"972f2298-461d-46ec-a00a-19ea21a500a5","Type":"ContainerStarted","Data":"408256bdbed5915c22e787ccda10f9b00f8fb450e40927822cfbd619970a3e14"} Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.898817 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2ww\" (UniqueName: \"kubernetes.io/projected/b4fa8367-bad7-4681-93a1-835923d93421-kube-api-access-jb2ww\") pod \"kube-state-metrics-0\" (UID: \"b4fa8367-bad7-4681-93a1-835923d93421\") " pod="openstack/kube-state-metrics-0" Jan 23 13:48:37 crc kubenswrapper[4771]: I0123 13:48:37.951487 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 13:48:38 crc kubenswrapper[4771]: I0123 13:48:38.747754 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:48:38 crc kubenswrapper[4771]: I0123 13:48:38.891420 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4fa8367-bad7-4681-93a1-835923d93421","Type":"ContainerStarted","Data":"27e1804d80ac3542c96eef4969e1b8e395197254ec54c931eae86939ac153833"} Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.036619 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.040606 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.048065 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-qkchd" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.048333 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.048537 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.048733 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.048781 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.048925 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.048979 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.052837 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.057978 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191000 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191064 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-config\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191127 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191170 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191209 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191254 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191285 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/eb8a3435-994c-4d4d-aefa-2e60577378cf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191347 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.191370 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jcrd\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-kube-api-access-9jcrd\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292384 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292459 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/eb8a3435-994c-4d4d-aefa-2e60577378cf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") 
" pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292516 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292537 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jcrd\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-kube-api-access-9jcrd\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292557 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292609 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-config\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292660 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.292694 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.299629 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.299745 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.299745 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.299843 4771 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.299640 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.299984 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.301257 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.301291 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fedd087c020fedaa53662fb68cb2c644ee54851c0d7a037bd330262bcce6f5b4/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.303283 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.304965 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.305873 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.305975 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.308915 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-config\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.308968 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.309270 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.313178 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.331926 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/eb8a3435-994c-4d4d-aefa-2e60577378cf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.332896 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jcrd\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-kube-api-access-9jcrd\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.383816 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.677878 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-qkchd" Jan 23 13:48:39 crc kubenswrapper[4771]: I0123 13:48:39.684946 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 13:48:40 crc kubenswrapper[4771]: I0123 13:48:40.986557 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 13:48:40 crc kubenswrapper[4771]: I0123 13:48:40.988021 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:40 crc kubenswrapper[4771]: I0123 13:48:40.996752 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 13:48:40 crc kubenswrapper[4771]: I0123 13:48:40.996801 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 13:48:40 crc kubenswrapper[4771]: I0123 13:48:40.996851 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6b68p" Jan 23 13:48:40 crc kubenswrapper[4771]: I0123 13:48:40.998428 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 13:48:40 crc kubenswrapper[4771]: I0123 13:48:40.998545 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.021227 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.147936 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nxbfr"] Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.150763 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.162089 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.162158 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.162222 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.162348 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45gvt\" (UniqueName: \"kubernetes.io/projected/c67783c2-46a6-49f8-86e7-e32d83a45526-kube-api-access-45gvt\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.162463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c67783c2-46a6-49f8-86e7-e32d83a45526-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.162521 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c67783c2-46a6-49f8-86e7-e32d83a45526-config\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.162624 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.163119 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c67783c2-46a6-49f8-86e7-e32d83a45526-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.167280 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.167576 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.167779 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6jjrx" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.184709 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nxbfr"] Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.203572 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7txgd"] Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.205976 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.217828 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7txgd"] Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266457 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-run-ovn\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266505 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/686807bb-241a-4fdb-bca8-0eba0745aed1-ovn-controller-tls-certs\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266549 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c67783c2-46a6-49f8-86e7-e32d83a45526-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266566 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686807bb-241a-4fdb-bca8-0eba0745aed1-combined-ca-bundle\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266585 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-log-ovn\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266612 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266639 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/686807bb-241a-4fdb-bca8-0eba0745aed1-scripts\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266666 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltt2m\" (UniqueName: \"kubernetes.io/projected/686807bb-241a-4fdb-bca8-0eba0745aed1-kube-api-access-ltt2m\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266700 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266728 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-run\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266755 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.266792 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45gvt\" (UniqueName: \"kubernetes.io/projected/c67783c2-46a6-49f8-86e7-e32d83a45526-kube-api-access-45gvt\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.267751 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.269738 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c67783c2-46a6-49f8-86e7-e32d83a45526-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.269796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c67783c2-46a6-49f8-86e7-e32d83a45526-config\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.270017 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c67783c2-46a6-49f8-86e7-e32d83a45526-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.270982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c67783c2-46a6-49f8-86e7-e32d83a45526-config\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.271252 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c67783c2-46a6-49f8-86e7-e32d83a45526-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.271495 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.282366 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.284825 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.289868 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45gvt\" (UniqueName: \"kubernetes.io/projected/c67783c2-46a6-49f8-86e7-e32d83a45526-kube-api-access-45gvt\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.320251 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.323743 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67783c2-46a6-49f8-86e7-e32d83a45526-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c67783c2-46a6-49f8-86e7-e32d83a45526\") " pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.327136 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.375810 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-run\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.375876 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-lib\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376035 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7z72\" (UniqueName: \"kubernetes.io/projected/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-kube-api-access-q7z72\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376065 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-run-ovn\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376098 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/686807bb-241a-4fdb-bca8-0eba0745aed1-ovn-controller-tls-certs\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376141 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-log\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376195 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686807bb-241a-4fdb-bca8-0eba0745aed1-combined-ca-bundle\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376224 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-log-ovn\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376303 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/686807bb-241a-4fdb-bca8-0eba0745aed1-scripts\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376350 
4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltt2m\" (UniqueName: \"kubernetes.io/projected/686807bb-241a-4fdb-bca8-0eba0745aed1-kube-api-access-ltt2m\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376506 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-run\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376633 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-scripts\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.376717 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-etc-ovs\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.379114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-run-ovn\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.379912 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-run\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.380065 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/686807bb-241a-4fdb-bca8-0eba0745aed1-var-log-ovn\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.380633 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/686807bb-241a-4fdb-bca8-0eba0745aed1-scripts\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.393479 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/686807bb-241a-4fdb-bca8-0eba0745aed1-ovn-controller-tls-certs\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.407684 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/686807bb-241a-4fdb-bca8-0eba0745aed1-combined-ca-bundle\") pod \"ovn-controller-nxbfr\" (UID: 
\"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.418913 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltt2m\" (UniqueName: \"kubernetes.io/projected/686807bb-241a-4fdb-bca8-0eba0745aed1-kube-api-access-ltt2m\") pod \"ovn-controller-nxbfr\" (UID: \"686807bb-241a-4fdb-bca8-0eba0745aed1\") " pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.480718 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nxbfr" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.480737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7z72\" (UniqueName: \"kubernetes.io/projected/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-kube-api-access-q7z72\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.480785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-log\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.480869 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-scripts\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.480898 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-etc-ovs\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.480933 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-run\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.480948 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-lib\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.481174 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-lib\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.481479 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-etc-ovs\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " 
pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.481481 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-run\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.481575 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-var-log\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.483999 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-scripts\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.505925 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7z72\" (UniqueName: \"kubernetes.io/projected/f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b-kube-api-access-q7z72\") pod \"ovn-controller-ovs-7txgd\" (UID: \"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b\") " pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:41 crc kubenswrapper[4771]: I0123 13:48:41.541551 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:48:43 crc kubenswrapper[4771]: I0123 13:48:43.990311 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lthk"] Jan 23 13:48:43 crc kubenswrapper[4771]: I0123 13:48:43.994318 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.019665 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lthk"] Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.146919 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-utilities\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.146989 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-catalog-content\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.147011 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncgnz\" (UniqueName: \"kubernetes.io/projected/3b487610-fee5-485d-8034-78634876c316-kube-api-access-ncgnz\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.248523 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-utilities\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.248606 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-catalog-content\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.248631 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncgnz\" (UniqueName: \"kubernetes.io/projected/3b487610-fee5-485d-8034-78634876c316-kube-api-access-ncgnz\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.249545 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-utilities\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.249840 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-catalog-content\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.276611 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ncgnz\" (UniqueName: \"kubernetes.io/projected/3b487610-fee5-485d-8034-78634876c316-kube-api-access-ncgnz\") pod \"redhat-operators-6lthk\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:44 crc kubenswrapper[4771]: I0123 13:48:44.332317 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.684364 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.686780 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.691664 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.691767 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.691893 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.691903 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dmtrm" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.722913 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.787655 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dz59\" (UniqueName: \"kubernetes.io/projected/000f2478-86af-4e39-80c3-790a0457923e-kube-api-access-7dz59\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.787724 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/000f2478-86af-4e39-80c3-790a0457923e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.787760 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000f2478-86af-4e39-80c3-790a0457923e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.787784 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000f2478-86af-4e39-80c3-790a0457923e-config\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.787962 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc 
kubenswrapper[4771]: I0123 13:48:45.787997 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.788039 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.788211 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893535 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/000f2478-86af-4e39-80c3-790a0457923e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893609 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000f2478-86af-4e39-80c3-790a0457923e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893634 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000f2478-86af-4e39-80c3-790a0457923e-config\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893710 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893747 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893782 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893817 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.893909 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dz59\" (UniqueName: \"kubernetes.io/projected/000f2478-86af-4e39-80c3-790a0457923e-kube-api-access-7dz59\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.894142 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.894793 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/000f2478-86af-4e39-80c3-790a0457923e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.895456 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000f2478-86af-4e39-80c3-790a0457923e-config\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.897696 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000f2478-86af-4e39-80c3-790a0457923e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.913846 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.914246 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.919886 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/000f2478-86af-4e39-80c3-790a0457923e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.927872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dz59\" (UniqueName: \"kubernetes.io/projected/000f2478-86af-4e39-80c3-790a0457923e-kube-api-access-7dz59\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:45 crc kubenswrapper[4771]: I0123 13:48:45.931938 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"000f2478-86af-4e39-80c3-790a0457923e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.015528 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.372026 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9ngnh"] Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.379378 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.414754 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ngnh"] Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.516438 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-catalog-content\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.516543 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hqrv\" (UniqueName: \"kubernetes.io/projected/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-kube-api-access-2hqrv\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.516572 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-utilities\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.618706 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-catalog-content\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.618798 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hqrv\" (UniqueName: \"kubernetes.io/projected/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-kube-api-access-2hqrv\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.618850 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-utilities\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.619337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-utilities\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.619671 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-catalog-content\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.638839 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hqrv\" (UniqueName: \"kubernetes.io/projected/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-kube-api-access-2hqrv\") pod \"certified-operators-9ngnh\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:46 crc kubenswrapper[4771]: I0123 13:48:46.723148 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:48:47 crc kubenswrapper[4771]: I0123 13:48:47.426088 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q542n" Jan 23 13:48:49 crc kubenswrapper[4771]: I0123 13:48:49.561569 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q542n"] Jan 23 13:48:49 crc kubenswrapper[4771]: I0123 13:48:49.562210 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q542n" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="registry-server" containerID="cri-o://82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2" gracePeriod=2 Jan 23 13:48:51 crc kubenswrapper[4771]: I0123 13:48:51.099953 4771 generic.go:334] "Generic (PLEG): container finished" podID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerID="82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2" exitCode=0 Jan 23 13:48:51 crc kubenswrapper[4771]: I0123 13:48:51.100004 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q542n" event={"ID":"2be0c2bb-124a-4f4f-aec3-29edfaaaf554","Type":"ContainerDied","Data":"82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2"} Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.668285 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vq7"] Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.674291 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.681305 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vq7"] Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.809434 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-catalog-content\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.809547 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlbq8\" (UniqueName: \"kubernetes.io/projected/77220d49-56d8-4882-a85e-c5772ea35ad1-kube-api-access-xlbq8\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.809713 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-utilities\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.912483 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-catalog-content\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.912612 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlbq8\" (UniqueName: \"kubernetes.io/projected/77220d49-56d8-4882-a85e-c5772ea35ad1-kube-api-access-xlbq8\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.912735 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-utilities\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.913101 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-catalog-content\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.913216 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-utilities\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:54 crc kubenswrapper[4771]: I0123 13:48:54.943647 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xlbq8\" (UniqueName: \"kubernetes.io/projected/77220d49-56d8-4882-a85e-c5772ea35ad1-kube-api-access-xlbq8\") pod \"redhat-marketplace-r2vq7\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:55 crc kubenswrapper[4771]: I0123 13:48:55.005673 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:48:57 crc kubenswrapper[4771]: E0123 13:48:57.352583 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2 is running failed: container process not found" containerID="82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 13:48:57 crc kubenswrapper[4771]: E0123 13:48:57.352882 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2 is running failed: container process not found" containerID="82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 13:48:57 crc kubenswrapper[4771]: E0123 13:48:57.353209 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2 is running failed: container process not found" containerID="82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 13:48:57 crc kubenswrapper[4771]: E0123 13:48:57.353250 4771 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-q542n" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="registry-server" Jan 23 13:48:59 crc kubenswrapper[4771]: I0123 13:48:59.367974 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ngnh"] Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.671560 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q542n" Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.765645 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-utilities\") pod \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.765980 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-catalog-content\") pod \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.766337 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-927m7\" (UniqueName: \"kubernetes.io/projected/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-kube-api-access-927m7\") pod \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\" (UID: \"2be0c2bb-124a-4f4f-aec3-29edfaaaf554\") " Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.766400 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-utilities" (OuterVolumeSpecName: "utilities") pod "2be0c2bb-124a-4f4f-aec3-29edfaaaf554" (UID: "2be0c2bb-124a-4f4f-aec3-29edfaaaf554"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.766707 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.772959 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-kube-api-access-927m7" (OuterVolumeSpecName: "kube-api-access-927m7") pod "2be0c2bb-124a-4f4f-aec3-29edfaaaf554" (UID: "2be0c2bb-124a-4f4f-aec3-29edfaaaf554"). InnerVolumeSpecName "kube-api-access-927m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.821077 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2be0c2bb-124a-4f4f-aec3-29edfaaaf554" (UID: "2be0c2bb-124a-4f4f-aec3-29edfaaaf554"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.869076 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-927m7\" (UniqueName: \"kubernetes.io/projected/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-kube-api-access-927m7\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:01 crc kubenswrapper[4771]: I0123 13:49:01.869115 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be0c2bb-124a-4f4f-aec3-29edfaaaf554-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:02 crc kubenswrapper[4771]: I0123 13:49:02.203154 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q542n" event={"ID":"2be0c2bb-124a-4f4f-aec3-29edfaaaf554","Type":"ContainerDied","Data":"0ce9ccf09e649690d56aadedad9d714601c2057d57647b78f58f6f817f2431e2"} Jan 23 13:49:02 crc kubenswrapper[4771]: I0123 13:49:02.203210 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q542n" Jan 23 13:49:02 crc kubenswrapper[4771]: I0123 13:49:02.203222 4771 scope.go:117] "RemoveContainer" containerID="82a43f850f7d6162c95cc3e9e58ff6ba1d101fb1a7be249f7186b6ffec6117f2" Jan 23 13:49:02 crc kubenswrapper[4771]: I0123 13:49:02.234210 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q542n"] Jan 23 13:49:02 crc kubenswrapper[4771]: I0123 13:49:02.242478 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q542n"] Jan 23 13:49:02 crc kubenswrapper[4771]: I0123 13:49:02.406739 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 13:49:03 crc kubenswrapper[4771]: I0123 13:49:03.244824 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" path="/var/lib/kubelet/pods/2be0c2bb-124a-4f4f-aec3-29edfaaaf554/volumes" Jan 23 13:49:09 crc kubenswrapper[4771]: W0123 13:49:09.006387 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fac3169_8b55_40d5_8966_a31fd8f7ba7d.slice/crio-e04540970b90f725821520000006b76dfc837b196bacfc3c4526bb9b610f2c63 WatchSource:0}: Error finding container e04540970b90f725821520000006b76dfc837b196bacfc3c4526bb9b610f2c63: Status 404 returned error can't find the container with id e04540970b90f725821520000006b76dfc837b196bacfc3c4526bb9b610f2c63 Jan 23 13:49:09 crc kubenswrapper[4771]: I0123 13:49:09.278296 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ngnh" event={"ID":"0fac3169-8b55-40d5-8966-a31fd8f7ba7d","Type":"ContainerStarted","Data":"e04540970b90f725821520000006b76dfc837b196bacfc3c4526bb9b610f2c63"} Jan 23 13:49:09 crc kubenswrapper[4771]: I0123 13:49:09.410672 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7txgd"] Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.803380 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.803454 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.803593 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq68t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-66f4d755d5-ksg8n_openstack(18538bf0-cdae-4e6d-84c8-c8f9335e2ba7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.804783 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" podUID="18538bf0-cdae-4e6d-84c8-c8f9335e2ba7" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.841302 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.841359 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.841509 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn6qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5cf864db9c-rdpn7_openstack(31590df7-b974-4f61-8530-5713c2f887c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.842703 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" podUID="31590df7-b974-4f61-8530-5713c2f887c2" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.847444 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.847508 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.847662 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9lts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bbfdc9b97-87fwq_openstack(7f9447a5-17a1-4b31-b96c-26fedbb30f47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.848835 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" podUID="7f9447a5-17a1-4b31-b96c-26fedbb30f47" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.858008 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.858069 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.858186 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k44tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-54ddbcd685-tc9ls_openstack(bea41385-4d73-47af-94c4-9c9babe781d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.859567 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" podUID="bea41385-4d73-47af-94c4-9c9babe781d2" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.860430 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.860459 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.860529 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwn86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c5cd96d89-drjcn_openstack(c19fc401-2298-4b38-9f40-0d6fa490445d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:49:09 crc kubenswrapper[4771]: E0123 13:49:09.861645 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" podUID="c19fc401-2298-4b38-9f40-0d6fa490445d" Jan 23 13:49:10 crc kubenswrapper[4771]: I0123 13:49:10.119118 4771 scope.go:117] "RemoveContainer" containerID="f0f455ec790fb4e58daeb84a535074e13890c53a3760ea0b93161d967910a3b4" Jan 23 13:49:10 crc kubenswrapper[4771]: I0123 13:49:10.294050 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7txgd" event={"ID":"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b","Type":"ContainerStarted","Data":"28a1b78e7efe8f8d34bfa5f08c484ad029252132c0c989d5ebc35bb85f366d33"} Jan 23 13:49:10 crc kubenswrapper[4771]: I0123 13:49:10.298035 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c67783c2-46a6-49f8-86e7-e32d83a45526","Type":"ContainerStarted","Data":"af6f1fbddd91a87c753ab7a26cb374078f6f59936a74d1e6c1b59656f4bae587"} Jan 23 13:49:10 crc kubenswrapper[4771]: E0123 13:49:10.303174 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" podUID="bea41385-4d73-47af-94c4-9c9babe781d2" Jan 23 13:49:10 crc kubenswrapper[4771]: E0123 13:49:10.303584 4771 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" podUID="31590df7-b974-4f61-8530-5713c2f887c2" Jan 23 13:49:10 crc kubenswrapper[4771]: I0123 13:49:10.922526 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lthk"] Jan 23 13:49:10 crc kubenswrapper[4771]: I0123 13:49:10.924662 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vq7"] Jan 23 13:49:10 crc kubenswrapper[4771]: I0123 13:49:10.939538 4771 scope.go:117] "RemoveContainer" containerID="631f6ef7d591321bc5a8ab8152b9e559bd06f21b0fb6691fb7625e006a05255d" Jan 23 13:49:11 crc kubenswrapper[4771]: I0123 13:49:11.117263 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nxbfr"] Jan 23 13:49:11 crc kubenswrapper[4771]: I0123 13:49:11.125350 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:49:11 crc kubenswrapper[4771]: I0123 13:49:11.210086 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 13:49:11 crc kubenswrapper[4771]: W0123 13:49:11.873515 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77220d49_56d8_4882_a85e_c5772ea35ad1.slice/crio-386cd486e7397b26c04bd4495a0e5a36c7dbdba9627e9e69639c3e529fdabf6f WatchSource:0}: Error finding container 386cd486e7397b26c04bd4495a0e5a36c7dbdba9627e9e69639c3e529fdabf6f: Status 404 returned error can't find the container with id 386cd486e7397b26c04bd4495a0e5a36c7dbdba9627e9e69639c3e529fdabf6f Jan 23 13:49:11 crc kubenswrapper[4771]: W0123 13:49:11.883749 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b487610_fee5_485d_8034_78634876c316.slice/crio-a90596e3a0bfe2200823f397a442f9b32ce693c77c30fc9c957ee0370e7aa130 WatchSource:0}: Error finding container a90596e3a0bfe2200823f397a442f9b32ce693c77c30fc9c957ee0370e7aa130: Status 404 returned error can't find the container with id a90596e3a0bfe2200823f397a442f9b32ce693c77c30fc9c957ee0370e7aa130 Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.090647 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.102091 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.217340 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwn86\" (UniqueName: \"kubernetes.io/projected/c19fc401-2298-4b38-9f40-0d6fa490445d-kube-api-access-cwn86\") pod \"c19fc401-2298-4b38-9f40-0d6fa490445d\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.217543 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9447a5-17a1-4b31-b96c-26fedbb30f47-config\") pod \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.217645 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-dns-svc\") pod \"c19fc401-2298-4b38-9f40-0d6fa490445d\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.217726 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lts\" (UniqueName: \"kubernetes.io/projected/7f9447a5-17a1-4b31-b96c-26fedbb30f47-kube-api-access-k9lts\") pod \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\" (UID: \"7f9447a5-17a1-4b31-b96c-26fedbb30f47\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.217753 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-config\") pod \"c19fc401-2298-4b38-9f40-0d6fa490445d\" (UID: \"c19fc401-2298-4b38-9f40-0d6fa490445d\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.218169 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9447a5-17a1-4b31-b96c-26fedbb30f47-config" (OuterVolumeSpecName: "config") pod "7f9447a5-17a1-4b31-b96c-26fedbb30f47" (UID: "7f9447a5-17a1-4b31-b96c-26fedbb30f47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.218614 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c19fc401-2298-4b38-9f40-0d6fa490445d" (UID: "c19fc401-2298-4b38-9f40-0d6fa490445d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.218670 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-config" (OuterVolumeSpecName: "config") pod "c19fc401-2298-4b38-9f40-0d6fa490445d" (UID: "c19fc401-2298-4b38-9f40-0d6fa490445d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.224661 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9447a5-17a1-4b31-b96c-26fedbb30f47-kube-api-access-k9lts" (OuterVolumeSpecName: "kube-api-access-k9lts") pod "7f9447a5-17a1-4b31-b96c-26fedbb30f47" (UID: "7f9447a5-17a1-4b31-b96c-26fedbb30f47"). InnerVolumeSpecName "kube-api-access-k9lts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.226749 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19fc401-2298-4b38-9f40-0d6fa490445d-kube-api-access-cwn86" (OuterVolumeSpecName: "kube-api-access-cwn86") pod "c19fc401-2298-4b38-9f40-0d6fa490445d" (UID: "c19fc401-2298-4b38-9f40-0d6fa490445d"). InnerVolumeSpecName "kube-api-access-cwn86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.319525 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lts\" (UniqueName: \"kubernetes.io/projected/7f9447a5-17a1-4b31-b96c-26fedbb30f47-kube-api-access-k9lts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.319566 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.319582 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwn86\" (UniqueName: \"kubernetes.io/projected/c19fc401-2298-4b38-9f40-0d6fa490445d-kube-api-access-cwn86\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.319595 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9447a5-17a1-4b31-b96c-26fedbb30f47-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.319607 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c19fc401-2298-4b38-9f40-0d6fa490445d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.324994 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" event={"ID":"c19fc401-2298-4b38-9f40-0d6fa490445d","Type":"ContainerDied","Data":"bf9be56a07ed8679c430e489619fe9327bbd146296f8a5c63ea2b92aa239e21f"} Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.325067 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c5cd96d89-drjcn" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.326838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lthk" event={"ID":"3b487610-fee5-485d-8034-78634876c316","Type":"ContainerStarted","Data":"a90596e3a0bfe2200823f397a442f9b32ce693c77c30fc9c957ee0370e7aa130"} Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.328513 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr" event={"ID":"686807bb-241a-4fdb-bca8-0eba0745aed1","Type":"ContainerStarted","Data":"9c096d4e4487d1be4722d95352035f49cb422ab3725deb2968ba62da17cd9a5a"} Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.329942 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vq7" event={"ID":"77220d49-56d8-4882-a85e-c5772ea35ad1","Type":"ContainerStarted","Data":"386cd486e7397b26c04bd4495a0e5a36c7dbdba9627e9e69639c3e529fdabf6f"} Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.330763 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" event={"ID":"7f9447a5-17a1-4b31-b96c-26fedbb30f47","Type":"ContainerDied","Data":"54305f24e288c5c231d6896553a988ad36cd6b3c30765fc79aa2593cfbd9d300"} Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.330828 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbfdc9b97-87fwq" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.393604 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c5cd96d89-drjcn"] Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.406633 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c5cd96d89-drjcn"] Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.427188 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbfdc9b97-87fwq"] Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.432247 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbfdc9b97-87fwq"] Jan 23 13:49:12 crc kubenswrapper[4771]: W0123 13:49:12.754622 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod000f2478_86af_4e39_80c3_790a0457923e.slice/crio-6ba4e3ec3e833c99a04e68d6f5f590fb454ce4672ebaa08dce873734b6571093 WatchSource:0}: Error finding container 6ba4e3ec3e833c99a04e68d6f5f590fb454ce4672ebaa08dce873734b6571093: Status 404 returned error can't find the container with id 6ba4e3ec3e833c99a04e68d6f5f590fb454ce4672ebaa08dce873734b6571093 Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.836240 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.929716 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq68t\" (UniqueName: \"kubernetes.io/projected/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-kube-api-access-kq68t\") pod \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.929772 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-config\") pod \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.929952 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-dns-svc\") pod \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\" (UID: \"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7\") " Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.931272 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18538bf0-cdae-4e6d-84c8-c8f9335e2ba7" (UID: "18538bf0-cdae-4e6d-84c8-c8f9335e2ba7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.931281 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-config" (OuterVolumeSpecName: "config") pod "18538bf0-cdae-4e6d-84c8-c8f9335e2ba7" (UID: "18538bf0-cdae-4e6d-84c8-c8f9335e2ba7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:12 crc kubenswrapper[4771]: I0123 13:49:12.943756 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-kube-api-access-kq68t" (OuterVolumeSpecName: "kube-api-access-kq68t") pod "18538bf0-cdae-4e6d-84c8-c8f9335e2ba7" (UID: "18538bf0-cdae-4e6d-84c8-c8f9335e2ba7"). InnerVolumeSpecName "kube-api-access-kq68t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.031426 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.031463 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq68t\" (UniqueName: \"kubernetes.io/projected/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-kube-api-access-kq68t\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.031475 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.241516 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f9447a5-17a1-4b31-b96c-26fedbb30f47" path="/var/lib/kubelet/pods/7f9447a5-17a1-4b31-b96c-26fedbb30f47/volumes" Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.242035 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19fc401-2298-4b38-9f40-0d6fa490445d" path="/var/lib/kubelet/pods/c19fc401-2298-4b38-9f40-0d6fa490445d/volumes" Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.341683 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"000f2478-86af-4e39-80c3-790a0457923e","Type":"ContainerStarted","Data":"6ba4e3ec3e833c99a04e68d6f5f590fb454ce4672ebaa08dce873734b6571093"} Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.343864 4771 generic.go:334] "Generic (PLEG): container finished" podID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerID="fb3d12063d788e971cbe4398133ea05420c5bf5c855025bdfeeeb8a0dd990f77" exitCode=0 Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.343960 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ngnh" event={"ID":"0fac3169-8b55-40d5-8966-a31fd8f7ba7d","Type":"ContainerDied","Data":"fb3d12063d788e971cbe4398133ea05420c5bf5c855025bdfeeeb8a0dd990f77"} Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.345562 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" event={"ID":"18538bf0-cdae-4e6d-84c8-c8f9335e2ba7","Type":"ContainerDied","Data":"0053da20c0d9e2719bb861ff0ec34c5e613cf2168e32b21b3b3149c30fa133a8"} Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.345572 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66f4d755d5-ksg8n" Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.348190 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerStarted","Data":"0cfa971fe1a49d0ba9a7b6d3acb3e0e94477a2e6702092f14514716c6c5cf98f"} Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.420053 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66f4d755d5-ksg8n"] Jan 23 13:49:13 crc kubenswrapper[4771]: I0123 13:49:13.434261 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66f4d755d5-ksg8n"] Jan 23 13:49:14 crc kubenswrapper[4771]: E0123 13:49:14.213514 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 13:49:14 crc kubenswrapper[4771]: E0123 13:49:14.213922 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 13:49:14 crc kubenswrapper[4771]: E0123 13:49:14.214125 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jb2ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(b4fa8367-bad7-4681-93a1-835923d93421): ErrImagePull: rpc error: code = Canceled desc = copying system image from 
manifest list: copying layer: context canceled" logger="UnhandledError" Jan 23 13:49:14 crc kubenswrapper[4771]: E0123 13:49:14.215537 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="b4fa8367-bad7-4681-93a1-835923d93421" Jan 23 13:49:14 crc kubenswrapper[4771]: E0123 13:49:14.363136 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="b4fa8367-bad7-4681-93a1-835923d93421" Jan 23 13:49:15 crc kubenswrapper[4771]: I0123 13:49:15.241030 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18538bf0-cdae-4e6d-84c8-c8f9335e2ba7" path="/var/lib/kubelet/pods/18538bf0-cdae-4e6d-84c8-c8f9335e2ba7/volumes" Jan 23 13:49:15 crc kubenswrapper[4771]: I0123 13:49:15.388798 4771 generic.go:334] "Generic (PLEG): container finished" podID="3b487610-fee5-485d-8034-78634876c316" containerID="da6315340567dd403677a536d1f2c88fd41baed94f2905fc2f493db066765754" exitCode=0 Jan 23 13:49:15 crc kubenswrapper[4771]: I0123 13:49:15.388872 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lthk" event={"ID":"3b487610-fee5-485d-8034-78634876c316","Type":"ContainerDied","Data":"da6315340567dd403677a536d1f2c88fd41baed94f2905fc2f493db066765754"} Jan 23 13:49:16 crc kubenswrapper[4771]: I0123 13:49:16.410001 4771 generic.go:334] "Generic (PLEG): container finished" podID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerID="dd440dee2f96ebe143d8ebca2ea08035f599535ce1c361b1600c5777aa4d47ca" exitCode=0 Jan 23 13:49:16 crc kubenswrapper[4771]: I0123 13:49:16.410189 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ngnh" event={"ID":"0fac3169-8b55-40d5-8966-a31fd8f7ba7d","Type":"ContainerDied","Data":"dd440dee2f96ebe143d8ebca2ea08035f599535ce1c361b1600c5777aa4d47ca"} Jan 23 13:49:16 crc kubenswrapper[4771]: I0123 13:49:16.414038 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c67783c2-46a6-49f8-86e7-e32d83a45526","Type":"ContainerStarted","Data":"96e46d0b8bde2e46dfae21feec9f3dc287e9362e37447ceda9a0b51ebbf8bd47"} Jan 23 13:49:16 crc kubenswrapper[4771]: I0123 13:49:16.420337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"add41260-19c8-4989-a0a9-97a93316c6e8","Type":"ContainerStarted","Data":"e4acc9393208e4d49c802e4a57dd208b07d7085cb8b380720b50af01eee3b5dc"} Jan 23 13:49:16 crc kubenswrapper[4771]: I0123 13:49:16.422833 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"972f2298-461d-46ec-a00a-19ea21a500a5","Type":"ContainerStarted","Data":"e6389f800918019058d748285f9c3d2c1ae41896886e7ddf059a9f11a058f296"} Jan 23 13:49:16 crc kubenswrapper[4771]: I0123 13:49:16.423099 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 13:49:16 crc kubenswrapper[4771]: I0123 13:49:16.457422 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=8.07421029 
podStartE2EDuration="41.457383434s" podCreationTimestamp="2026-01-23 13:48:35 +0000 UTC" firstStartedPulling="2026-01-23 13:48:37.047860098 +0000 UTC m=+958.070397723" lastFinishedPulling="2026-01-23 13:49:10.431033242 +0000 UTC m=+991.453570867" observedRunningTime="2026-01-23 13:49:16.451097496 +0000 UTC m=+997.473635121" watchObservedRunningTime="2026-01-23 13:49:16.457383434 +0000 UTC m=+997.479921059" Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.434138 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"34159c2a-f5ad-4b4c-a1c6-556001c43134","Type":"ContainerStarted","Data":"11f4869d99ffe7e8063e1f516c8ea473461de1d5e3d9ccb29753854645dab4c8"} Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.437186 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"000f2478-86af-4e39-80c3-790a0457923e","Type":"ContainerStarted","Data":"f27712251c43d9b57055c3ee89a0cd3f2bea11b281282f71a5b936a466fb83b9"} Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.439694 4771 generic.go:334] "Generic (PLEG): container finished" podID="f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b" containerID="0f06258ed2d290ab574db932ec4cfa7401c75ba87c06c23cfb21953a9c0d5a4c" exitCode=0 Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.439770 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7txgd" event={"ID":"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b","Type":"ContainerDied","Data":"0f06258ed2d290ab574db932ec4cfa7401c75ba87c06c23cfb21953a9c0d5a4c"} Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.443998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c3f2be4-082b-4eb5-88d6-2b069d2dd361","Type":"ContainerStarted","Data":"7d99e1c86e9e4ecf377470ad86e9ca700b7f9077628c6f89ad3b8868b41a0180"} Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.447001 4771 generic.go:334] "Generic (PLEG): container finished" podID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerID="1c455d8facc4648b27ae811b59a63c060de1c5406c6e81d74784e97d4505c308" exitCode=0 Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.447571 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vq7" event={"ID":"77220d49-56d8-4882-a85e-c5772ea35ad1","Type":"ContainerDied","Data":"1c455d8facc4648b27ae811b59a63c060de1c5406c6e81d74784e97d4505c308"} Jan 23 13:49:17 crc kubenswrapper[4771]: I0123 13:49:17.452426 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"90863ead-98c1-4258-b980-919471f6d76c","Type":"ContainerStarted","Data":"7db8033a1756bcad3f640c82238ffecf95e1ed8855728e450aa2724c82058873"} Jan 23 13:49:18 crc kubenswrapper[4771]: I0123 13:49:18.469195 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lthk" event={"ID":"3b487610-fee5-485d-8034-78634876c316","Type":"ContainerStarted","Data":"524a79d3a4c9409e1f828a13d6135d17c8b9bff16db66c41802d6f0a58572f59"} Jan 23 13:49:18 crc kubenswrapper[4771]: I0123 13:49:18.475679 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"205cfab6-722b-4d70-bdb7-3a12aaeea6e2","Type":"ContainerStarted","Data":"25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20"} Jan 23 13:49:18 crc kubenswrapper[4771]: I0123 13:49:18.480878 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-7txgd" event={"ID":"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b","Type":"ContainerStarted","Data":"d383a921e2fc4596f2a498e1e3cb893c08331e474695867a437c9593be2b2b18"} Jan 23 13:49:18 crc kubenswrapper[4771]: I0123 13:49:18.486655 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ngnh" event={"ID":"0fac3169-8b55-40d5-8966-a31fd8f7ba7d","Type":"ContainerStarted","Data":"5bbc9e62abd1383c12a6cb588c9072958e0cf2334b20337e3d6428d96c39f306"} Jan 23 13:49:18 crc kubenswrapper[4771]: I0123 13:49:18.565255 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9ngnh" podStartSLOduration=29.638562337 podStartE2EDuration="32.565228864s" podCreationTimestamp="2026-01-23 13:48:46 +0000 UTC" firstStartedPulling="2026-01-23 13:49:13.897196771 +0000 UTC m=+994.919734396" lastFinishedPulling="2026-01-23 13:49:16.823863298 +0000 UTC m=+997.846400923" observedRunningTime="2026-01-23 13:49:18.550328333 +0000 UTC m=+999.572865978" watchObservedRunningTime="2026-01-23 13:49:18.565228864 +0000 UTC m=+999.587766499" Jan 23 13:49:19 crc kubenswrapper[4771]: I0123 13:49:19.504376 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerStarted","Data":"252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c"} Jan 23 13:49:19 crc kubenswrapper[4771]: I0123 13:49:19.515365 4771 generic.go:334] "Generic (PLEG): container finished" podID="3b487610-fee5-485d-8034-78634876c316" containerID="524a79d3a4c9409e1f828a13d6135d17c8b9bff16db66c41802d6f0a58572f59" exitCode=0 Jan 23 13:49:19 crc kubenswrapper[4771]: I0123 13:49:19.515911 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lthk" event={"ID":"3b487610-fee5-485d-8034-78634876c316","Type":"ContainerDied","Data":"524a79d3a4c9409e1f828a13d6135d17c8b9bff16db66c41802d6f0a58572f59"} Jan 23 13:49:19 crc kubenswrapper[4771]: I0123 13:49:19.536625 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr" event={"ID":"686807bb-241a-4fdb-bca8-0eba0745aed1","Type":"ContainerStarted","Data":"83f6e2f3d2dac98a0bfb6beb424461cbdef3a9ec86ed9bce54cfa5950233b53f"} Jan 23 13:49:19 crc kubenswrapper[4771]: I0123 13:49:19.537009 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nxbfr" Jan 23 13:49:19 crc kubenswrapper[4771]: I0123 13:49:19.599536 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nxbfr" podStartSLOduration=34.969095827 podStartE2EDuration="38.599494668s" podCreationTimestamp="2026-01-23 13:48:41 +0000 UTC" firstStartedPulling="2026-01-23 13:49:11.893272311 +0000 UTC m=+992.915809936" lastFinishedPulling="2026-01-23 13:49:15.523671152 +0000 UTC m=+996.546208777" observedRunningTime="2026-01-23 13:49:19.589912236 +0000 UTC m=+1000.612449871" watchObservedRunningTime="2026-01-23 13:49:19.599494668 +0000 UTC m=+1000.622032293" Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.179595 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.555638 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lthk" 
event={"ID":"3b487610-fee5-485d-8034-78634876c316","Type":"ContainerStarted","Data":"bbc68e29d18e5420f01046eebe851c9b12357a4df9d801077fce66219adb0ae1"} Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.558305 4771 generic.go:334] "Generic (PLEG): container finished" podID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerID="c6e4a3e7a50f1a1ebb31b38d4c4cdf33b60d371a7a628878d8b11cc79d7d2693" exitCode=0 Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.558437 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vq7" event={"ID":"77220d49-56d8-4882-a85e-c5772ea35ad1","Type":"ContainerDied","Data":"c6e4a3e7a50f1a1ebb31b38d4c4cdf33b60d371a7a628878d8b11cc79d7d2693"} Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.560311 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"000f2478-86af-4e39-80c3-790a0457923e","Type":"ContainerStarted","Data":"5680a808d77f6233f02add5d93a6880c7eef3d65d445ae5dd7c7250d662a80ce"} Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.564020 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7txgd" event={"ID":"f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b","Type":"ContainerStarted","Data":"3f32687b74bf6943aa593df0864bc076f1b27f0c0f25533a11cbefefd10f1ad0"} Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.564155 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.564210 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.565916 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c67783c2-46a6-49f8-86e7-e32d83a45526","Type":"ContainerStarted","Data":"36f046142bb87d4af4f3571ff4a09f6c23dfd7cb8e4864ca6852203e6b43fc1d"} Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.647045 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7txgd" podStartSLOduration=35.194617543 podStartE2EDuration="40.647024035s" podCreationTimestamp="2026-01-23 13:48:41 +0000 UTC" firstStartedPulling="2026-01-23 13:49:09.964093339 +0000 UTC m=+990.986630964" lastFinishedPulling="2026-01-23 13:49:15.416499831 +0000 UTC m=+996.439037456" observedRunningTime="2026-01-23 13:49:21.64527787 +0000 UTC m=+1002.667815495" watchObservedRunningTime="2026-01-23 13:49:21.647024035 +0000 UTC m=+1002.669561660" Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.648832 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lthk" podStartSLOduration=33.192245157 podStartE2EDuration="38.648826311s" podCreationTimestamp="2026-01-23 13:48:43 +0000 UTC" firstStartedPulling="2026-01-23 13:49:15.418109541 +0000 UTC m=+996.440647166" lastFinishedPulling="2026-01-23 13:49:20.874690695 +0000 UTC m=+1001.897228320" observedRunningTime="2026-01-23 13:49:21.603958316 +0000 UTC m=+1002.626495941" watchObservedRunningTime="2026-01-23 13:49:21.648826311 +0000 UTC m=+1002.671363936" Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.694610 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=29.726435193 podStartE2EDuration="37.694591386s" podCreationTimestamp="2026-01-23 13:48:44 +0000 UTC" 
firstStartedPulling="2026-01-23 13:49:12.759176313 +0000 UTC m=+993.781713938" lastFinishedPulling="2026-01-23 13:49:20.727332506 +0000 UTC m=+1001.749870131" observedRunningTime="2026-01-23 13:49:21.688889726 +0000 UTC m=+1002.711427351" watchObservedRunningTime="2026-01-23 13:49:21.694591386 +0000 UTC m=+1002.717129011" Jan 23 13:49:21 crc kubenswrapper[4771]: I0123 13:49:21.719095 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.808926956 podStartE2EDuration="42.719075338s" podCreationTimestamp="2026-01-23 13:48:39 +0000 UTC" firstStartedPulling="2026-01-23 13:49:09.789143839 +0000 UTC m=+990.811681464" lastFinishedPulling="2026-01-23 13:49:20.699292221 +0000 UTC m=+1001.721829846" observedRunningTime="2026-01-23 13:49:21.714580197 +0000 UTC m=+1002.737117832" watchObservedRunningTime="2026-01-23 13:49:21.719075338 +0000 UTC m=+1002.741612963" Jan 23 13:49:22 crc kubenswrapper[4771]: I0123 13:49:22.016649 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 13:49:22 crc kubenswrapper[4771]: I0123 13:49:22.062402 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 13:49:22 crc kubenswrapper[4771]: I0123 13:49:22.576598 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 13:49:22 crc kubenswrapper[4771]: I0123 13:49:22.628740 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 13:49:22 crc kubenswrapper[4771]: I0123 13:49:22.957389 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf864db9c-rdpn7"] Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.004299 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b4f6b59d9-2kxlz"] Jan 23 13:49:23 crc kubenswrapper[4771]: E0123 13:49:23.004923 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="registry-server" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.004949 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="registry-server" Jan 23 13:49:23 crc kubenswrapper[4771]: E0123 13:49:23.004964 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="extract-utilities" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.004972 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="extract-utilities" Jan 23 13:49:23 crc kubenswrapper[4771]: E0123 13:49:23.004985 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="extract-content" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.004994 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="extract-content" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.005219 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be0c2bb-124a-4f4f-aec3-29edfaaaf554" containerName="registry-server" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.006552 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.019028 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4f6b59d9-2kxlz"] Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.026673 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.060126 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzpzs\" (UniqueName: \"kubernetes.io/projected/d5e4650e-d2a3-4057-8348-b7ea3a97f439-kube-api-access-mzpzs\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.060231 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-config\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.060261 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-dns-svc\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.060310 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.106120 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bt9nt"] Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.116559 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.119802 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.149358 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bt9nt"] Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161343 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/47d1109c-2e29-4a97-9c19-b4b50b2e4014-ovs-rundir\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161425 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d1109c-2e29-4a97-9c19-b4b50b2e4014-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161471 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-config\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161493 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2lq\" (UniqueName: \"kubernetes.io/projected/47d1109c-2e29-4a97-9c19-b4b50b2e4014-kube-api-access-jh2lq\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161610 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-dns-svc\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161828 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d1109c-2e29-4a97-9c19-b4b50b2e4014-config\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161929 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzpzs\" (UniqueName: \"kubernetes.io/projected/d5e4650e-d2a3-4057-8348-b7ea3a97f439-kube-api-access-mzpzs\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " 
pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.161956 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d1109c-2e29-4a97-9c19-b4b50b2e4014-combined-ca-bundle\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.162093 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/47d1109c-2e29-4a97-9c19-b4b50b2e4014-ovn-rundir\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.162380 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-config\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.162977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-dns-svc\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.163111 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.200429 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzpzs\" (UniqueName: \"kubernetes.io/projected/d5e4650e-d2a3-4057-8348-b7ea3a97f439-kube-api-access-mzpzs\") pod \"dnsmasq-dns-6b4f6b59d9-2kxlz\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.283611 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/47d1109c-2e29-4a97-9c19-b4b50b2e4014-ovn-rundir\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.283819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/47d1109c-2e29-4a97-9c19-b4b50b2e4014-ovs-rundir\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.283865 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d1109c-2e29-4a97-9c19-b4b50b2e4014-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc 
kubenswrapper[4771]: I0123 13:49:23.284001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2lq\" (UniqueName: \"kubernetes.io/projected/47d1109c-2e29-4a97-9c19-b4b50b2e4014-kube-api-access-jh2lq\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.284395 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d1109c-2e29-4a97-9c19-b4b50b2e4014-config\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.284567 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d1109c-2e29-4a97-9c19-b4b50b2e4014-combined-ca-bundle\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.285666 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/47d1109c-2e29-4a97-9c19-b4b50b2e4014-ovn-rundir\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.286735 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/47d1109c-2e29-4a97-9c19-b4b50b2e4014-ovs-rundir\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.287778 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d1109c-2e29-4a97-9c19-b4b50b2e4014-config\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.296271 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47d1109c-2e29-4a97-9c19-b4b50b2e4014-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.322363 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47d1109c-2e29-4a97-9c19-b4b50b2e4014-combined-ca-bundle\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.335445 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2lq\" (UniqueName: \"kubernetes.io/projected/47d1109c-2e29-4a97-9c19-b4b50b2e4014-kube-api-access-jh2lq\") pod \"ovn-controller-metrics-bt9nt\" (UID: \"47d1109c-2e29-4a97-9c19-b4b50b2e4014\") " pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.336002 4771 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.336240 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.433431 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bt9nt" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.451017 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.487540 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54ddbcd685-tc9ls"] Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.534802 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6656dd9c95-t6pv5"] Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.537012 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.540952 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.548421 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6656dd9c95-t6pv5"] Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.591504 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n292p\" (UniqueName: \"kubernetes.io/projected/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-kube-api-access-n292p\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.591562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-nb\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.591659 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-sb\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.591680 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-dns-svc\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.591717 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-config\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.603310 4771 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vq7" event={"ID":"77220d49-56d8-4882-a85e-c5772ea35ad1","Type":"ContainerStarted","Data":"83318bc801f6bfc131edddabcb8846608891c00b1f43889c86ef1ee8ed372a9a"} Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.608556 4771 generic.go:334] "Generic (PLEG): container finished" podID="31590df7-b974-4f61-8530-5713c2f887c2" containerID="e244546f34b538596d56a3f7efbda99dee26507a725016bdd923267943ddb5da" exitCode=0 Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.609304 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" event={"ID":"31590df7-b974-4f61-8530-5713c2f887c2","Type":"ContainerDied","Data":"e244546f34b538596d56a3f7efbda99dee26507a725016bdd923267943ddb5da"} Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.609336 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.693305 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-nb\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.693502 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-sb\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.693557 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-dns-svc\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.693653 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-config\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.693765 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n292p\" (UniqueName: \"kubernetes.io/projected/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-kube-api-access-n292p\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.693973 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r2vq7" podStartSLOduration=24.767970691 podStartE2EDuration="29.693949212s" podCreationTimestamp="2026-01-23 13:48:54 +0000 UTC" firstStartedPulling="2026-01-23 13:49:17.448389474 +0000 UTC m=+998.470927099" lastFinishedPulling="2026-01-23 13:49:22.374367995 +0000 UTC m=+1003.396905620" observedRunningTime="2026-01-23 13:49:23.649204141 +0000 UTC m=+1004.671741766" watchObservedRunningTime="2026-01-23 13:49:23.693949212 +0000 UTC m=+1004.716486857" Jan 23 13:49:23 crc 
kubenswrapper[4771]: I0123 13:49:23.694585 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-nb\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.697724 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-dns-svc\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.719320 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-sb\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.722143 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-config\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.758256 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.770697 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n292p\" (UniqueName: \"kubernetes.io/projected/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-kube-api-access-n292p\") pod \"dnsmasq-dns-6656dd9c95-t6pv5\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:23 crc kubenswrapper[4771]: I0123 13:49:23.860950 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.018985 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.023141 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.032555 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.032836 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vrm6r" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.032892 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.032934 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.042034 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.109047 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-scripts\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.109108 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-config\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.109271 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.109364 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.109395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.109450 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgjd7\" (UniqueName: \"kubernetes.io/projected/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-kube-api-access-wgjd7\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.109476 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: 
I0123 13:49:24.189213 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.211531 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-config\") pod \"31590df7-b974-4f61-8530-5713c2f887c2\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.211633 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-dns-svc\") pod \"31590df7-b974-4f61-8530-5713c2f887c2\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.211759 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn6qr\" (UniqueName: \"kubernetes.io/projected/31590df7-b974-4f61-8530-5713c2f887c2-kube-api-access-mn6qr\") pod \"31590df7-b974-4f61-8530-5713c2f887c2\" (UID: \"31590df7-b974-4f61-8530-5713c2f887c2\") " Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.212221 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.212256 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.212312 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgjd7\" (UniqueName: \"kubernetes.io/projected/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-kube-api-access-wgjd7\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.212341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.212390 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-scripts\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.212438 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-config\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.212531 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.220082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.220810 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-scripts\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.221097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-config\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.223965 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.237755 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31590df7-b974-4f61-8530-5713c2f887c2-kube-api-access-mn6qr" (OuterVolumeSpecName: "kube-api-access-mn6qr") pod "31590df7-b974-4f61-8530-5713c2f887c2" (UID: "31590df7-b974-4f61-8530-5713c2f887c2"). InnerVolumeSpecName "kube-api-access-mn6qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.279101 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31590df7-b974-4f61-8530-5713c2f887c2" (UID: "31590df7-b974-4f61-8530-5713c2f887c2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.286474 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgjd7\" (UniqueName: \"kubernetes.io/projected/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-kube-api-access-wgjd7\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.286935 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.287428 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4f9071a-a9f7-46ca-905f-aac12e33f2f7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e4f9071a-a9f7-46ca-905f-aac12e33f2f7\") " pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.288049 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-config" (OuterVolumeSpecName: "config") pod "31590df7-b974-4f61-8530-5713c2f887c2" (UID: "31590df7-b974-4f61-8530-5713c2f887c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.314199 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.314634 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31590df7-b974-4f61-8530-5713c2f887c2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.314647 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn6qr\" (UniqueName: \"kubernetes.io/projected/31590df7-b974-4f61-8530-5713c2f887c2-kube-api-access-mn6qr\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.337841 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.339533 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.349281 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bt9nt"] Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.365003 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.542345 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4f6b59d9-2kxlz"] Jan 23 13:49:24 crc kubenswrapper[4771]: W0123 13:49:24.638115 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e4650e_d2a3_4057_8348_b7ea3a97f439.slice/crio-684df138051e31d3b1ab0ef7a3d35b73f4ff1e8cffd9624922b482d2a5772f2d WatchSource:0}: Error finding container 684df138051e31d3b1ab0ef7a3d35b73f4ff1e8cffd9624922b482d2a5772f2d: Status 404 returned error can't find the container with id 684df138051e31d3b1ab0ef7a3d35b73f4ff1e8cffd9624922b482d2a5772f2d Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.660048 4771 generic.go:334] "Generic (PLEG): container finished" podID="bea41385-4d73-47af-94c4-9c9babe781d2" containerID="93b090e7690bf6607fcea4d534bc7a3fcb88e88bafb6a3f9c9cce4eed40e77ad" exitCode=0 Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.660107 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" event={"ID":"bea41385-4d73-47af-94c4-9c9babe781d2","Type":"ContainerDied","Data":"93b090e7690bf6607fcea4d534bc7a3fcb88e88bafb6a3f9c9cce4eed40e77ad"} Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.678662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bt9nt" event={"ID":"47d1109c-2e29-4a97-9c19-b4b50b2e4014","Type":"ContainerStarted","Data":"8ce3646c5dc12bba5896cf78fd95865476aab5004274f7b9bf95ce38c10fbd09"} Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.680782 4771 generic.go:334] "Generic (PLEG): container finished" podID="34159c2a-f5ad-4b4c-a1c6-556001c43134" containerID="11f4869d99ffe7e8063e1f516c8ea473461de1d5e3d9ccb29753854645dab4c8" exitCode=0 Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.680824 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"34159c2a-f5ad-4b4c-a1c6-556001c43134","Type":"ContainerDied","Data":"11f4869d99ffe7e8063e1f516c8ea473461de1d5e3d9ccb29753854645dab4c8"} Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.741560 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" event={"ID":"31590df7-b974-4f61-8530-5713c2f887c2","Type":"ContainerDied","Data":"9182554cbec0d38b2d2ad663c01d90c613cfae4cb3f7776360c11e65979b6a78"} Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.741823 4771 scope.go:117] "RemoveContainer" containerID="e244546f34b538596d56a3f7efbda99dee26507a725016bdd923267943ddb5da" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.744643 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf864db9c-rdpn7" Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.965998 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf864db9c-rdpn7"] Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.979899 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cf864db9c-rdpn7"] Jan 23 13:49:24 crc kubenswrapper[4771]: I0123 13:49:24.990965 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6656dd9c95-t6pv5"] Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.005791 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.005860 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.077191 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.123389 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.252574 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31590df7-b974-4f61-8530-5713c2f887c2" path="/var/lib/kubelet/pods/31590df7-b974-4f61-8530-5713c2f887c2/volumes" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.284834 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-config\") pod \"bea41385-4d73-47af-94c4-9c9babe781d2\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.285273 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-dns-svc\") pod \"bea41385-4d73-47af-94c4-9c9babe781d2\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.285348 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k44tw\" (UniqueName: \"kubernetes.io/projected/bea41385-4d73-47af-94c4-9c9babe781d2-kube-api-access-k44tw\") pod \"bea41385-4d73-47af-94c4-9c9babe781d2\" (UID: \"bea41385-4d73-47af-94c4-9c9babe781d2\") " Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.291993 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea41385-4d73-47af-94c4-9c9babe781d2-kube-api-access-k44tw" (OuterVolumeSpecName: "kube-api-access-k44tw") pod "bea41385-4d73-47af-94c4-9c9babe781d2" (UID: "bea41385-4d73-47af-94c4-9c9babe781d2"). InnerVolumeSpecName "kube-api-access-k44tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.315211 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bea41385-4d73-47af-94c4-9c9babe781d2" (UID: "bea41385-4d73-47af-94c4-9c9babe781d2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.337915 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-config" (OuterVolumeSpecName: "config") pod "bea41385-4d73-47af-94c4-9c9babe781d2" (UID: "bea41385-4d73-47af-94c4-9c9babe781d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.387701 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k44tw\" (UniqueName: \"kubernetes.io/projected/bea41385-4d73-47af-94c4-9c9babe781d2-kube-api-access-k44tw\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.387741 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.387753 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea41385-4d73-47af-94c4-9c9babe781d2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.489523 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lthk" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="registry-server" probeResult="failure" output=< Jan 23 13:49:25 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 13:49:25 crc kubenswrapper[4771]: > Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.752886 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"34159c2a-f5ad-4b4c-a1c6-556001c43134","Type":"ContainerStarted","Data":"3c79957324f2121033c7ef59a5bd5a9f12c4b05eaf462f23da6d8d0e6c446d55"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.757347 4771 generic.go:334] "Generic (PLEG): container finished" podID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerID="125376e1bff67cbb711f8602b1736c40460d44c57087e5d692b34d9559c8df32" exitCode=0 Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.757385 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" event={"ID":"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98","Type":"ContainerDied","Data":"125376e1bff67cbb711f8602b1736c40460d44c57087e5d692b34d9559c8df32"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.757447 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" event={"ID":"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98","Type":"ContainerStarted","Data":"1a39622795fcee2480fa11d46e5b0a378fa7f05d8940510d6901929d93fc194c"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.759166 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e4f9071a-a9f7-46ca-905f-aac12e33f2f7","Type":"ContainerStarted","Data":"9b9b0d327515d386b2aba7085b90a8f8b436f3dea40811da791b3df64df90cea"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.761295 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" event={"ID":"bea41385-4d73-47af-94c4-9c9babe781d2","Type":"ContainerDied","Data":"890de292640aacc5b796ab00760b51b0e5b1287f98eb73110efe9455dcdd16cb"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.761344 4771 
scope.go:117] "RemoveContainer" containerID="93b090e7690bf6607fcea4d534bc7a3fcb88e88bafb6a3f9c9cce4eed40e77ad" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.761372 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54ddbcd685-tc9ls" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.763658 4771 generic.go:334] "Generic (PLEG): container finished" podID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerID="e210858a649f243efcb71439d03fc9798e72d50ba799b8d7f0537d29a3246433" exitCode=0 Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.764570 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" event={"ID":"d5e4650e-d2a3-4057-8348-b7ea3a97f439","Type":"ContainerDied","Data":"e210858a649f243efcb71439d03fc9798e72d50ba799b8d7f0537d29a3246433"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.764614 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" event={"ID":"d5e4650e-d2a3-4057-8348-b7ea3a97f439","Type":"ContainerStarted","Data":"684df138051e31d3b1ab0ef7a3d35b73f4ff1e8cffd9624922b482d2a5772f2d"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.785320 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.033338009 podStartE2EDuration="53.785298782s" podCreationTimestamp="2026-01-23 13:48:32 +0000 UTC" firstStartedPulling="2026-01-23 13:48:36.67560801 +0000 UTC m=+957.698145635" lastFinishedPulling="2026-01-23 13:49:10.427568783 +0000 UTC m=+991.450106408" observedRunningTime="2026-01-23 13:49:25.779133017 +0000 UTC m=+1006.801670662" watchObservedRunningTime="2026-01-23 13:49:25.785298782 +0000 UTC m=+1006.807836407" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.786764 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bt9nt" event={"ID":"47d1109c-2e29-4a97-9c19-b4b50b2e4014","Type":"ContainerStarted","Data":"1097b139b8cd956226e13d6cbc589ea5d297f57e416d1a0dc61d785f387b79d3"} Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.861050 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bt9nt" podStartSLOduration=2.86099594 podStartE2EDuration="2.86099594s" podCreationTimestamp="2026-01-23 13:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:49:25.852995818 +0000 UTC m=+1006.875533443" watchObservedRunningTime="2026-01-23 13:49:25.86099594 +0000 UTC m=+1006.883533565" Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.891991 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54ddbcd685-tc9ls"] Jan 23 13:49:25 crc kubenswrapper[4771]: I0123 13:49:25.908171 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54ddbcd685-tc9ls"] Jan 23 13:49:26 crc kubenswrapper[4771]: I0123 13:49:26.101238 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-r2vq7" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="registry-server" probeResult="failure" output=< Jan 23 13:49:26 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 13:49:26 crc kubenswrapper[4771]: > Jan 23 13:49:26 crc kubenswrapper[4771]: I0123 13:49:26.723845 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:49:26 crc kubenswrapper[4771]: I0123 13:49:26.724447 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:49:26 crc kubenswrapper[4771]: I0123 13:49:26.790673 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:49:26 crc kubenswrapper[4771]: I0123 13:49:26.864235 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:49:27 crc kubenswrapper[4771]: I0123 13:49:27.214586 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ngnh"] Jan 23 13:49:27 crc kubenswrapper[4771]: I0123 13:49:27.270911 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea41385-4d73-47af-94c4-9c9babe781d2" path="/var/lib/kubelet/pods/bea41385-4d73-47af-94c4-9c9babe781d2/volumes" Jan 23 13:49:27 crc kubenswrapper[4771]: I0123 13:49:27.820474 4771 generic.go:334] "Generic (PLEG): container finished" podID="90863ead-98c1-4258-b980-919471f6d76c" containerID="7db8033a1756bcad3f640c82238ffecf95e1ed8855728e450aa2724c82058873" exitCode=0 Jan 23 13:49:27 crc kubenswrapper[4771]: I0123 13:49:27.820560 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"90863ead-98c1-4258-b980-919471f6d76c","Type":"ContainerDied","Data":"7db8033a1756bcad3f640c82238ffecf95e1ed8855728e450aa2724c82058873"} Jan 23 13:49:27 crc kubenswrapper[4771]: I0123 13:49:27.835680 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" event={"ID":"d5e4650e-d2a3-4057-8348-b7ea3a97f439","Type":"ContainerStarted","Data":"8b3cd81870368d9472e02548426f38459e747114973615e5bddac190e7c8fa33"} Jan 23 13:49:27 crc kubenswrapper[4771]: I0123 13:49:27.993085 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4f6b59d9-2kxlz"] Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.036864 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-858b8668f9-p866q"] Jan 23 13:49:28 crc kubenswrapper[4771]: E0123 13:49:28.037217 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31590df7-b974-4f61-8530-5713c2f887c2" containerName="init" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.037230 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="31590df7-b974-4f61-8530-5713c2f887c2" containerName="init" Jan 23 13:49:28 crc kubenswrapper[4771]: E0123 13:49:28.037257 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea41385-4d73-47af-94c4-9c9babe781d2" containerName="init" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.037264 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea41385-4d73-47af-94c4-9c9babe781d2" containerName="init" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.045588 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="31590df7-b974-4f61-8530-5713c2f887c2" containerName="init" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.045641 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea41385-4d73-47af-94c4-9c9babe781d2" containerName="init" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.046769 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.081764 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-858b8668f9-p866q"] Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.204153 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-config\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.204395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnxhv\" (UniqueName: \"kubernetes.io/projected/0eee851c-9eea-48a7-a2c1-3444eb1738de-kube-api-access-wnxhv\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.204618 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-sb\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.204748 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-dns-svc\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.204845 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-nb\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.306859 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-sb\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.306922 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-dns-svc\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.306957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-nb\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.307022 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-config\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.307059 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnxhv\" (UniqueName: \"kubernetes.io/projected/0eee851c-9eea-48a7-a2c1-3444eb1738de-kube-api-access-wnxhv\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.308292 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-nb\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.308329 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-dns-svc\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.308343 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-config\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.308981 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-sb\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.327658 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnxhv\" (UniqueName: \"kubernetes.io/projected/0eee851c-9eea-48a7-a2c1-3444eb1738de-kube-api-access-wnxhv\") pod \"dnsmasq-dns-858b8668f9-p866q\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.445856 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.846316 4771 generic.go:334] "Generic (PLEG): container finished" podID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerID="252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c" exitCode=0 Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.846482 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerDied","Data":"252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c"} Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.850886 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" event={"ID":"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98","Type":"ContainerStarted","Data":"3a0792d437fd829601da985a638f8723eaf333f603842808f670fa7fd50e7998"} Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.851065 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.854382 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" podUID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerName="dnsmasq-dns" containerID="cri-o://8b3cd81870368d9472e02548426f38459e747114973615e5bddac190e7c8fa33" gracePeriod=10 Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.854483 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"90863ead-98c1-4258-b980-919471f6d76c","Type":"ContainerStarted","Data":"143ca16db74bd57ea03284824b99a62d5d68bc24e2b2d05f76195c7c38a6df19"} Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.854639 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9ngnh" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="registry-server" containerID="cri-o://5bbc9e62abd1383c12a6cb588c9072958e0cf2334b20337e3d6428d96c39f306" gracePeriod=2 Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.854684 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.905127 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" podStartSLOduration=5.905103563 podStartE2EDuration="5.905103563s" podCreationTimestamp="2026-01-23 13:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:49:28.901718515 +0000 UTC m=+1009.924256150" watchObservedRunningTime="2026-01-23 13:49:28.905103563 +0000 UTC m=+1009.927641188" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.937097 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" podStartSLOduration=6.937074531 podStartE2EDuration="6.937074531s" podCreationTimestamp="2026-01-23 13:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:49:28.925680071 +0000 UTC m=+1009.948217696" watchObservedRunningTime="2026-01-23 13:49:28.937074531 +0000 UTC m=+1009.959612156" Jan 23 13:49:28 crc kubenswrapper[4771]: I0123 13:49:28.958927 4771 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.721819538 podStartE2EDuration="54.958909609s" podCreationTimestamp="2026-01-23 13:48:34 +0000 UTC" firstStartedPulling="2026-01-23 13:48:36.630525366 +0000 UTC m=+957.653062991" lastFinishedPulling="2026-01-23 13:49:10.867615437 +0000 UTC m=+991.890153062" observedRunningTime="2026-01-23 13:49:28.953288572 +0000 UTC m=+1009.975826207" watchObservedRunningTime="2026-01-23 13:49:28.958909609 +0000 UTC m=+1009.981447234" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.161151 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.167775 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.171112 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.171143 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.171509 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.181541 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-stf99" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.195624 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.324849 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.324972 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cch2q\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-kube-api-access-cch2q\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.326014 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cb429d80-3c7c-4014-9a5c-d40256e70014-cache\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.328969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb429d80-3c7c-4014-9a5c-d40256e70014-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.329114 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cb429d80-3c7c-4014-9a5c-d40256e70014-lock\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" 
Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.329247 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.432033 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.432683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: E0123 13:49:29.432325 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 13:49:29 crc kubenswrapper[4771]: E0123 13:49:29.432737 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.432770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cch2q\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-kube-api-access-cch2q\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.432839 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cb429d80-3c7c-4014-9a5c-d40256e70014-cache\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.432884 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb429d80-3c7c-4014-9a5c-d40256e70014-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.432934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cb429d80-3c7c-4014-9a5c-d40256e70014-lock\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.433695 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cb429d80-3c7c-4014-9a5c-d40256e70014-cache\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.433754 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") device mount path 
\"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.433783 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cb429d80-3c7c-4014-9a5c-d40256e70014-lock\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: E0123 13:49:29.435596 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift podName:cb429d80-3c7c-4014-9a5c-d40256e70014 nodeName:}" failed. No retries permitted until 2026-01-23 13:49:29.933848766 +0000 UTC m=+1010.956386451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift") pod "swift-storage-0" (UID: "cb429d80-3c7c-4014-9a5c-d40256e70014") : configmap "swift-ring-files" not found Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.445896 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb429d80-3c7c-4014-9a5c-d40256e70014-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.453438 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cch2q\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-kube-api-access-cch2q\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.474803 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.520615 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-25zc4"] Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.522466 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.525886 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.528072 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.528136 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.531043 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-25zc4"] Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.637059 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-swiftconf\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.637141 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8nx6\" (UniqueName: \"kubernetes.io/projected/de1618cb-bde8-4c44-846b-aabcbb2e3698-kube-api-access-c8nx6\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.637171 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-combined-ca-bundle\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.637225 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-ring-data-devices\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.637243 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-dispersionconf\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.637279 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de1618cb-bde8-4c44-846b-aabcbb2e3698-etc-swift\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.637632 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-scripts\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 
13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.739297 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de1618cb-bde8-4c44-846b-aabcbb2e3698-etc-swift\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.739386 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-scripts\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.739481 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-swiftconf\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.739510 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8nx6\" (UniqueName: \"kubernetes.io/projected/de1618cb-bde8-4c44-846b-aabcbb2e3698-kube-api-access-c8nx6\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.739531 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-combined-ca-bundle\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.739574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-dispersionconf\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.739589 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-ring-data-devices\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.740312 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-ring-data-devices\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.742013 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de1618cb-bde8-4c44-846b-aabcbb2e3698-etc-swift\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.742688 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-scripts\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.743926 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-dispersionconf\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.747852 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-swiftconf\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.748986 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-combined-ca-bundle\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.767920 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8nx6\" (UniqueName: \"kubernetes.io/projected/de1618cb-bde8-4c44-846b-aabcbb2e3698-kube-api-access-c8nx6\") pod \"swift-ring-rebalance-25zc4\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.868255 4771 generic.go:334] "Generic (PLEG): container finished" podID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerID="8b3cd81870368d9472e02548426f38459e747114973615e5bddac190e7c8fa33" exitCode=0 Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.868334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" event={"ID":"d5e4650e-d2a3-4057-8348-b7ea3a97f439","Type":"ContainerDied","Data":"8b3cd81870368d9472e02548426f38459e747114973615e5bddac190e7c8fa33"} Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.877645 4771 generic.go:334] "Generic (PLEG): container finished" podID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerID="5bbc9e62abd1383c12a6cb588c9072958e0cf2334b20337e3d6428d96c39f306" exitCode=0 Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.877710 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ngnh" event={"ID":"0fac3169-8b55-40d5-8966-a31fd8f7ba7d","Type":"ContainerDied","Data":"5bbc9e62abd1383c12a6cb588c9072958e0cf2334b20337e3d6428d96c39f306"} Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.878041 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:29 crc kubenswrapper[4771]: I0123 13:49:29.945979 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:29 crc kubenswrapper[4771]: E0123 13:49:29.946622 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 13:49:29 crc kubenswrapper[4771]: E0123 13:49:29.946668 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 13:49:29 crc kubenswrapper[4771]: E0123 13:49:29.946740 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift podName:cb429d80-3c7c-4014-9a5c-d40256e70014 nodeName:}" failed. No retries permitted until 2026-01-23 13:49:30.946719918 +0000 UTC m=+1011.969257543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift") pod "swift-storage-0" (UID: "cb429d80-3c7c-4014-9a5c-d40256e70014") : configmap "swift-ring-files" not found Jan 23 13:49:30 crc kubenswrapper[4771]: I0123 13:49:30.965494 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:30 crc kubenswrapper[4771]: E0123 13:49:30.965744 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 13:49:30 crc kubenswrapper[4771]: E0123 13:49:30.965959 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 13:49:30 crc kubenswrapper[4771]: E0123 13:49:30.966022 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift podName:cb429d80-3c7c-4014-9a5c-d40256e70014 nodeName:}" failed. No retries permitted until 2026-01-23 13:49:32.96600333 +0000 UTC m=+1013.988540955 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift") pod "swift-storage-0" (UID: "cb429d80-3c7c-4014-9a5c-d40256e70014") : configmap "swift-ring-files" not found Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.239487 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.343512 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.429300 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzpzs\" (UniqueName: \"kubernetes.io/projected/d5e4650e-d2a3-4057-8348-b7ea3a97f439-kube-api-access-mzpzs\") pod \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.429571 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-dns-svc\") pod \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.429644 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-ovsdbserver-sb\") pod \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.429988 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-config\") pod \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\" (UID: \"d5e4650e-d2a3-4057-8348-b7ea3a97f439\") " Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.481732 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e4650e-d2a3-4057-8348-b7ea3a97f439-kube-api-access-mzpzs" (OuterVolumeSpecName: "kube-api-access-mzpzs") pod "d5e4650e-d2a3-4057-8348-b7ea3a97f439" (UID: "d5e4650e-d2a3-4057-8348-b7ea3a97f439"). InnerVolumeSpecName "kube-api-access-mzpzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.530479 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-858b8668f9-p866q"] Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.556444 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-catalog-content\") pod \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.556814 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hqrv\" (UniqueName: \"kubernetes.io/projected/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-kube-api-access-2hqrv\") pod \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.557018 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-utilities\") pod \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\" (UID: \"0fac3169-8b55-40d5-8966-a31fd8f7ba7d\") " Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.564578 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-utilities" (OuterVolumeSpecName: "utilities") pod "0fac3169-8b55-40d5-8966-a31fd8f7ba7d" (UID: "0fac3169-8b55-40d5-8966-a31fd8f7ba7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.565365 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzpzs\" (UniqueName: \"kubernetes.io/projected/d5e4650e-d2a3-4057-8348-b7ea3a97f439-kube-api-access-mzpzs\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.567604 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.590726 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-kube-api-access-2hqrv" (OuterVolumeSpecName: "kube-api-access-2hqrv") pod "0fac3169-8b55-40d5-8966-a31fd8f7ba7d" (UID: "0fac3169-8b55-40d5-8966-a31fd8f7ba7d"). InnerVolumeSpecName "kube-api-access-2hqrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.606221 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5e4650e-d2a3-4057-8348-b7ea3a97f439" (UID: "d5e4650e-d2a3-4057-8348-b7ea3a97f439"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.615536 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-25zc4"] Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.624656 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5e4650e-d2a3-4057-8348-b7ea3a97f439" (UID: "d5e4650e-d2a3-4057-8348-b7ea3a97f439"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.652155 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-config" (OuterVolumeSpecName: "config") pod "d5e4650e-d2a3-4057-8348-b7ea3a97f439" (UID: "d5e4650e-d2a3-4057-8348-b7ea3a97f439"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.676470 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.676514 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.676528 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5e4650e-d2a3-4057-8348-b7ea3a97f439-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.676541 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hqrv\" (UniqueName: \"kubernetes.io/projected/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-kube-api-access-2hqrv\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.726363 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fac3169-8b55-40d5-8966-a31fd8f7ba7d" (UID: "0fac3169-8b55-40d5-8966-a31fd8f7ba7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.779002 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fac3169-8b55-40d5-8966-a31fd8f7ba7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.915014 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ngnh" event={"ID":"0fac3169-8b55-40d5-8966-a31fd8f7ba7d","Type":"ContainerDied","Data":"e04540970b90f725821520000006b76dfc837b196bacfc3c4526bb9b610f2c63"} Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.915111 4771 scope.go:117] "RemoveContainer" containerID="5bbc9e62abd1383c12a6cb588c9072958e0cf2334b20337e3d6428d96c39f306" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.915055 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9ngnh" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.923419 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e4f9071a-a9f7-46ca-905f-aac12e33f2f7","Type":"ContainerStarted","Data":"30282e8ce1b0a48d994ca4dbdea3dfa61841dfe13d5d0213cd147dcafc0a1d91"} Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.923489 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e4f9071a-a9f7-46ca-905f-aac12e33f2f7","Type":"ContainerStarted","Data":"b36fe42fa8ccbb6b1f084aac003cc4e3e8b0c41e005158224e24ca20dbff07b6"} Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.926696 4771 generic.go:334] "Generic (PLEG): container finished" podID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerID="0689993ce178a84e157f2f38d9e360cdf7ecc3f6236d6ce2baf7b3ac37e3e1e7" exitCode=0 Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.926762 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-858b8668f9-p866q" event={"ID":"0eee851c-9eea-48a7-a2c1-3444eb1738de","Type":"ContainerDied","Data":"0689993ce178a84e157f2f38d9e360cdf7ecc3f6236d6ce2baf7b3ac37e3e1e7"} Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.926841 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-858b8668f9-p866q" event={"ID":"0eee851c-9eea-48a7-a2c1-3444eb1738de","Type":"ContainerStarted","Data":"86cb50c7c70139ced0754ebb61c9f96cd8b7b7853085d7c7a0347141c0d8a32f"} Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.930064 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" event={"ID":"d5e4650e-d2a3-4057-8348-b7ea3a97f439","Type":"ContainerDied","Data":"684df138051e31d3b1ab0ef7a3d35b73f4ff1e8cffd9624922b482d2a5772f2d"} Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.931392 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b4f6b59d9-2kxlz" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.938515 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25zc4" event={"ID":"de1618cb-bde8-4c44-846b-aabcbb2e3698","Type":"ContainerStarted","Data":"60f76967e8ae606599c09f90c9cc473bec3d20f31a316ddaf8586b2362899a95"} Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.945057 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.250758367 podStartE2EDuration="9.945037015s" podCreationTimestamp="2026-01-23 13:49:23 +0000 UTC" firstStartedPulling="2026-01-23 13:49:25.102209388 +0000 UTC m=+1006.124747003" lastFinishedPulling="2026-01-23 13:49:31.796488026 +0000 UTC m=+1012.819025651" observedRunningTime="2026-01-23 13:49:32.942117844 +0000 UTC m=+1013.964655479" watchObservedRunningTime="2026-01-23 13:49:32.945037015 +0000 UTC m=+1013.967574640" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.970588 4771 scope.go:117] "RemoveContainer" containerID="dd440dee2f96ebe143d8ebca2ea08035f599535ce1c361b1600c5777aa4d47ca" Jan 23 13:49:32 crc kubenswrapper[4771]: I0123 13:49:32.984593 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:32 crc kubenswrapper[4771]: E0123 13:49:32.985873 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 13:49:32 crc kubenswrapper[4771]: E0123 13:49:32.985901 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 13:49:32 crc kubenswrapper[4771]: E0123 13:49:32.985958 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift podName:cb429d80-3c7c-4014-9a5c-d40256e70014 nodeName:}" failed. No retries permitted until 2026-01-23 13:49:36.985939337 +0000 UTC m=+1018.008476962 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift") pod "swift-storage-0" (UID: "cb429d80-3c7c-4014-9a5c-d40256e70014") : configmap "swift-ring-files" not found Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.084864 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4f6b59d9-2kxlz"] Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.104745 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b4f6b59d9-2kxlz"] Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.115382 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ngnh"] Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.120047 4771 scope.go:117] "RemoveContainer" containerID="fb3d12063d788e971cbe4398133ea05420c5bf5c855025bdfeeeb8a0dd990f77" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.126718 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9ngnh"] Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.194593 4771 scope.go:117] "RemoveContainer" containerID="8b3cd81870368d9472e02548426f38459e747114973615e5bddac190e7c8fa33" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.242562 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" path="/var/lib/kubelet/pods/0fac3169-8b55-40d5-8966-a31fd8f7ba7d/volumes" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.243333 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" path="/var/lib/kubelet/pods/d5e4650e-d2a3-4057-8348-b7ea3a97f439/volumes" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.250495 4771 scope.go:117] "RemoveContainer" containerID="e210858a649f243efcb71439d03fc9798e72d50ba799b8d7f0537d29a3246433" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.863466 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.948127 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-858b8668f9-p866q" event={"ID":"0eee851c-9eea-48a7-a2c1-3444eb1738de","Type":"ContainerStarted","Data":"0950f7746297950384c56a9218038f2a7ee2b7618033713147fbb195a90e8ec3"} Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.948293 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.948324 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:33 crc kubenswrapper[4771]: I0123 13:49:33.973863 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-858b8668f9-p866q" podStartSLOduration=5.973843518 podStartE2EDuration="5.973843518s" podCreationTimestamp="2026-01-23 13:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:49:33.96820708 +0000 UTC m=+1014.990744725" watchObservedRunningTime="2026-01-23 13:49:33.973843518 +0000 UTC m=+1014.996381143" Jan 23 13:49:34 crc kubenswrapper[4771]: I0123 13:49:34.394068 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lthk" 
Jan 23 13:49:34 crc kubenswrapper[4771]: I0123 13:49:34.447704 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:49:34 crc kubenswrapper[4771]: I0123 13:49:34.615561 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 13:49:34 crc kubenswrapper[4771]: I0123 13:49:34.615629 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 13:49:34 crc kubenswrapper[4771]: I0123 13:49:34.748612 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 13:49:34 crc kubenswrapper[4771]: I0123 13:49:34.940003 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lthk"] Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.063472 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.121125 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.134497 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:49:35 crc kubenswrapper[4771]: E0123 13:49:35.552650 4771 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.243:46412->38.102.83.243:45109: write tcp 38.102.83.243:46412->38.102.83.243:45109: write: connection reset by peer Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.683271 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.683329 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.872328 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-q42h4"] Jan 23 13:49:35 crc kubenswrapper[4771]: E0123 13:49:35.883439 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="registry-server" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.883475 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="registry-server" Jan 23 13:49:35 crc kubenswrapper[4771]: E0123 13:49:35.883491 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerName="dnsmasq-dns" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.883500 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerName="dnsmasq-dns" Jan 23 13:49:35 crc kubenswrapper[4771]: E0123 13:49:35.883520 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="extract-content" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.883530 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="extract-content" Jan 23 13:49:35 crc kubenswrapper[4771]: E0123 13:49:35.883574 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerName="init" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.883584 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerName="init" Jan 23 13:49:35 crc kubenswrapper[4771]: E0123 13:49:35.883597 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="extract-utilities" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.883606 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="extract-utilities" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.883921 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fac3169-8b55-40d5-8966-a31fd8f7ba7d" containerName="registry-server" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.883949 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e4650e-d2a3-4057-8348-b7ea3a97f439" containerName="dnsmasq-dns" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.884987 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.895148 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q42h4"] Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.954380 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e51270-e42d-4da2-bddd-22a2b4c2fa44-operator-scripts\") pod \"keystone-db-create-q42h4\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.954675 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cntwb\" (UniqueName: \"kubernetes.io/projected/97e51270-e42d-4da2-bddd-22a2b4c2fa44-kube-api-access-cntwb\") pod \"keystone-db-create-q42h4\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:35 crc kubenswrapper[4771]: I0123 13:49:35.999209 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lthk" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="registry-server" containerID="cri-o://bbc68e29d18e5420f01046eebe851c9b12357a4df9d801077fce66219adb0ae1" gracePeriod=2 Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.021865 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6997-account-create-update-8m4tt"] Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.023361 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.028014 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.058009 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cntwb\" (UniqueName: \"kubernetes.io/projected/97e51270-e42d-4da2-bddd-22a2b4c2fa44-kube-api-access-cntwb\") pod \"keystone-db-create-q42h4\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.058181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e51270-e42d-4da2-bddd-22a2b4c2fa44-operator-scripts\") pod \"keystone-db-create-q42h4\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.059036 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e51270-e42d-4da2-bddd-22a2b4c2fa44-operator-scripts\") pod \"keystone-db-create-q42h4\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.071778 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6997-account-create-update-8m4tt"] Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.094456 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cntwb\" (UniqueName: \"kubernetes.io/projected/97e51270-e42d-4da2-bddd-22a2b4c2fa44-kube-api-access-cntwb\") pod \"keystone-db-create-q42h4\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.100981 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-w664d"] Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.102552 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.113901 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-w664d"] Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.166659 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76sz\" (UniqueName: \"kubernetes.io/projected/5491bbf3-e170-4960-8f02-f9a9c0d5094e-kube-api-access-t76sz\") pod \"placement-db-create-w664d\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.166789 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5491bbf3-e170-4960-8f02-f9a9c0d5094e-operator-scripts\") pod \"placement-db-create-w664d\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.166886 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a93887d-524c-4c98-b15f-b5370b5b3fb2-operator-scripts\") pod \"keystone-6997-account-create-update-8m4tt\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.167308 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46qw\" (UniqueName: \"kubernetes.io/projected/1a93887d-524c-4c98-b15f-b5370b5b3fb2-kube-api-access-s46qw\") pod \"keystone-6997-account-create-update-8m4tt\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.183050 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75db-account-create-update-fhlwc"] Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.184286 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.189355 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.195498 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75db-account-create-update-fhlwc"] Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.212529 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.270475 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c816062-a5de-4c59-9c07-fa34ad4e8966-operator-scripts\") pod \"placement-75db-account-create-update-fhlwc\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.270564 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76sz\" (UniqueName: \"kubernetes.io/projected/5491bbf3-e170-4960-8f02-f9a9c0d5094e-kube-api-access-t76sz\") pod \"placement-db-create-w664d\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.270593 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5491bbf3-e170-4960-8f02-f9a9c0d5094e-operator-scripts\") pod \"placement-db-create-w664d\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.270620 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a93887d-524c-4c98-b15f-b5370b5b3fb2-operator-scripts\") pod \"keystone-6997-account-create-update-8m4tt\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.270711 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4fm7\" (UniqueName: \"kubernetes.io/projected/6c816062-a5de-4c59-9c07-fa34ad4e8966-kube-api-access-j4fm7\") pod \"placement-75db-account-create-update-fhlwc\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.270737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s46qw\" (UniqueName: \"kubernetes.io/projected/1a93887d-524c-4c98-b15f-b5370b5b3fb2-kube-api-access-s46qw\") pod \"keystone-6997-account-create-update-8m4tt\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.272014 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5491bbf3-e170-4960-8f02-f9a9c0d5094e-operator-scripts\") pod \"placement-db-create-w664d\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.273029 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a93887d-524c-4c98-b15f-b5370b5b3fb2-operator-scripts\") pod \"keystone-6997-account-create-update-8m4tt\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.294081 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t76sz\" (UniqueName: 
\"kubernetes.io/projected/5491bbf3-e170-4960-8f02-f9a9c0d5094e-kube-api-access-t76sz\") pod \"placement-db-create-w664d\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.294135 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46qw\" (UniqueName: \"kubernetes.io/projected/1a93887d-524c-4c98-b15f-b5370b5b3fb2-kube-api-access-s46qw\") pod \"keystone-6997-account-create-update-8m4tt\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.345507 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.372510 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c816062-a5de-4c59-9c07-fa34ad4e8966-operator-scripts\") pod \"placement-75db-account-create-update-fhlwc\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.372651 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4fm7\" (UniqueName: \"kubernetes.io/projected/6c816062-a5de-4c59-9c07-fa34ad4e8966-kube-api-access-j4fm7\") pod \"placement-75db-account-create-update-fhlwc\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.373812 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c816062-a5de-4c59-9c07-fa34ad4e8966-operator-scripts\") pod \"placement-75db-account-create-update-fhlwc\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.391048 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4fm7\" (UniqueName: \"kubernetes.io/projected/6c816062-a5de-4c59-9c07-fa34ad4e8966-kube-api-access-j4fm7\") pod \"placement-75db-account-create-update-fhlwc\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.523719 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.527369 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-w664d" Jan 23 13:49:36 crc kubenswrapper[4771]: I0123 13:49:36.998118 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:36 crc kubenswrapper[4771]: E0123 13:49:36.998397 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 13:49:36 crc kubenswrapper[4771]: E0123 13:49:36.998452 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 13:49:36 crc kubenswrapper[4771]: E0123 13:49:36.998523 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift podName:cb429d80-3c7c-4014-9a5c-d40256e70014 nodeName:}" failed. No retries permitted until 2026-01-23 13:49:44.998499486 +0000 UTC m=+1026.021037121 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift") pod "swift-storage-0" (UID: "cb429d80-3c7c-4014-9a5c-d40256e70014") : configmap "swift-ring-files" not found Jan 23 13:49:37 crc kubenswrapper[4771]: I0123 13:49:37.014679 4771 generic.go:334] "Generic (PLEG): container finished" podID="3b487610-fee5-485d-8034-78634876c316" containerID="bbc68e29d18e5420f01046eebe851c9b12357a4df9d801077fce66219adb0ae1" exitCode=0 Jan 23 13:49:37 crc kubenswrapper[4771]: I0123 13:49:37.014771 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lthk" event={"ID":"3b487610-fee5-485d-8034-78634876c316","Type":"ContainerDied","Data":"bbc68e29d18e5420f01046eebe851c9b12357a4df9d801077fce66219adb0ae1"} Jan 23 13:49:37 crc kubenswrapper[4771]: I0123 13:49:37.342982 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vq7"] Jan 23 13:49:37 crc kubenswrapper[4771]: I0123 13:49:37.343334 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r2vq7" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="registry-server" containerID="cri-o://83318bc801f6bfc131edddabcb8846608891c00b1f43889c86ef1ee8ed372a9a" gracePeriod=2 Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.041196 4771 generic.go:334] "Generic (PLEG): container finished" podID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerID="83318bc801f6bfc131edddabcb8846608891c00b1f43889c86ef1ee8ed372a9a" exitCode=0 Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.041266 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vq7" event={"ID":"77220d49-56d8-4882-a85e-c5772ea35ad1","Type":"ContainerDied","Data":"83318bc801f6bfc131edddabcb8846608891c00b1f43889c86ef1ee8ed372a9a"} Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.075959 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-zngng"] Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.077673 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.086953 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-zngng"] Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.126535 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7t8\" (UniqueName: \"kubernetes.io/projected/fb5b01d0-80b6-476d-90c7-960f8bcf901b-kube-api-access-nz7t8\") pod \"watcher-db-create-zngng\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.126831 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5b01d0-80b6-476d-90c7-960f8bcf901b-operator-scripts\") pod \"watcher-db-create-zngng\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.167880 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-62ab-account-create-update-fwllw"] Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.169472 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.172579 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.186697 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-62ab-account-create-update-fwllw"] Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.230626 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7t8\" (UniqueName: \"kubernetes.io/projected/fb5b01d0-80b6-476d-90c7-960f8bcf901b-kube-api-access-nz7t8\") pod \"watcher-db-create-zngng\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.230694 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5b01d0-80b6-476d-90c7-960f8bcf901b-operator-scripts\") pod \"watcher-db-create-zngng\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.230785 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8gzf\" (UniqueName: \"kubernetes.io/projected/891c08fa-6bf5-4df9-b57c-9af771aab285-kube-api-access-g8gzf\") pod \"watcher-62ab-account-create-update-fwllw\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.230816 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c08fa-6bf5-4df9-b57c-9af771aab285-operator-scripts\") pod \"watcher-62ab-account-create-update-fwllw\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.231953 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5b01d0-80b6-476d-90c7-960f8bcf901b-operator-scripts\") pod \"watcher-db-create-zngng\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.255696 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7t8\" (UniqueName: \"kubernetes.io/projected/fb5b01d0-80b6-476d-90c7-960f8bcf901b-kube-api-access-nz7t8\") pod \"watcher-db-create-zngng\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.333337 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8gzf\" (UniqueName: \"kubernetes.io/projected/891c08fa-6bf5-4df9-b57c-9af771aab285-kube-api-access-g8gzf\") pod \"watcher-62ab-account-create-update-fwllw\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.333391 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c08fa-6bf5-4df9-b57c-9af771aab285-operator-scripts\") pod \"watcher-62ab-account-create-update-fwllw\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.334489 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c08fa-6bf5-4df9-b57c-9af771aab285-operator-scripts\") pod \"watcher-62ab-account-create-update-fwllw\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.386190 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8gzf\" (UniqueName: \"kubernetes.io/projected/891c08fa-6bf5-4df9-b57c-9af771aab285-kube-api-access-g8gzf\") pod \"watcher-62ab-account-create-update-fwllw\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.406355 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-zngng" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.439194 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.447582 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.496170 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.529761 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6656dd9c95-t6pv5"] Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.529993 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerName="dnsmasq-dns" containerID="cri-o://3a0792d437fd829601da985a638f8723eaf333f603842808f670fa7fd50e7998" gracePeriod=10 Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.777199 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 13:49:38 crc kubenswrapper[4771]: I0123 13:49:38.862098 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Jan 23 13:49:39 crc kubenswrapper[4771]: I0123 13:49:39.055707 4771 generic.go:334] "Generic (PLEG): container finished" podID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerID="3a0792d437fd829601da985a638f8723eaf333f603842808f670fa7fd50e7998" exitCode=0 Jan 23 13:49:39 crc kubenswrapper[4771]: I0123 13:49:39.057331 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" event={"ID":"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98","Type":"ContainerDied","Data":"3a0792d437fd829601da985a638f8723eaf333f603842808f670fa7fd50e7998"} Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.480359 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.491793 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.604698 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-utilities\") pod \"77220d49-56d8-4882-a85e-c5772ea35ad1\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.604778 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-catalog-content\") pod \"3b487610-fee5-485d-8034-78634876c316\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.604889 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncgnz\" (UniqueName: \"kubernetes.io/projected/3b487610-fee5-485d-8034-78634876c316-kube-api-access-ncgnz\") pod \"3b487610-fee5-485d-8034-78634876c316\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.604941 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-catalog-content\") pod \"77220d49-56d8-4882-a85e-c5772ea35ad1\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.605039 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-utilities\") pod \"3b487610-fee5-485d-8034-78634876c316\" (UID: \"3b487610-fee5-485d-8034-78634876c316\") " Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.605081 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlbq8\" (UniqueName: \"kubernetes.io/projected/77220d49-56d8-4882-a85e-c5772ea35ad1-kube-api-access-xlbq8\") pod \"77220d49-56d8-4882-a85e-c5772ea35ad1\" (UID: \"77220d49-56d8-4882-a85e-c5772ea35ad1\") " Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.609224 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-utilities" (OuterVolumeSpecName: "utilities") pod "3b487610-fee5-485d-8034-78634876c316" (UID: "3b487610-fee5-485d-8034-78634876c316"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.610163 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-utilities" (OuterVolumeSpecName: "utilities") pod "77220d49-56d8-4882-a85e-c5772ea35ad1" (UID: "77220d49-56d8-4882-a85e-c5772ea35ad1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.611886 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b487610-fee5-485d-8034-78634876c316-kube-api-access-ncgnz" (OuterVolumeSpecName: "kube-api-access-ncgnz") pod "3b487610-fee5-485d-8034-78634876c316" (UID: "3b487610-fee5-485d-8034-78634876c316"). InnerVolumeSpecName "kube-api-access-ncgnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.613560 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77220d49-56d8-4882-a85e-c5772ea35ad1-kube-api-access-xlbq8" (OuterVolumeSpecName: "kube-api-access-xlbq8") pod "77220d49-56d8-4882-a85e-c5772ea35ad1" (UID: "77220d49-56d8-4882-a85e-c5772ea35ad1"). InnerVolumeSpecName "kube-api-access-xlbq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.650242 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77220d49-56d8-4882-a85e-c5772ea35ad1" (UID: "77220d49-56d8-4882-a85e-c5772ea35ad1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.724545 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.724583 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncgnz\" (UniqueName: \"kubernetes.io/projected/3b487610-fee5-485d-8034-78634876c316-kube-api-access-ncgnz\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.724595 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77220d49-56d8-4882-a85e-c5772ea35ad1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.724607 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.724617 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlbq8\" (UniqueName: \"kubernetes.io/projected/77220d49-56d8-4882-a85e-c5772ea35ad1-kube-api-access-xlbq8\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.841528 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b487610-fee5-485d-8034-78634876c316" (UID: "3b487610-fee5-485d-8034-78634876c316"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.847999 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q42h4"] Jan 23 13:49:41 crc kubenswrapper[4771]: I0123 13:49:41.931916 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b487610-fee5-485d-8034-78634876c316-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.092185 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.154854 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-nb\") pod \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.154937 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-dns-svc\") pod \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.154998 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n292p\" (UniqueName: \"kubernetes.io/projected/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-kube-api-access-n292p\") pod \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.155120 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-sb\") pod \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.155208 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-config\") pod \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\" (UID: \"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98\") " Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.198727 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q42h4" event={"ID":"97e51270-e42d-4da2-bddd-22a2b4c2fa44","Type":"ContainerStarted","Data":"e22038c60c2b73105a67198a0b7fa114d08b19ef43d6b55bee195a3d8e6e8739"} Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.227101 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2vq7" event={"ID":"77220d49-56d8-4882-a85e-c5772ea35ad1","Type":"ContainerDied","Data":"386cd486e7397b26c04bd4495a0e5a36c7dbdba9627e9e69639c3e529fdabf6f"} Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.227190 4771 scope.go:117] "RemoveContainer" containerID="83318bc801f6bfc131edddabcb8846608891c00b1f43889c86ef1ee8ed372a9a" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.227482 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2vq7" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.231053 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-kube-api-access-n292p" (OuterVolumeSpecName: "kube-api-access-n292p") pod "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" (UID: "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98"). InnerVolumeSpecName "kube-api-access-n292p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.269498 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n292p\" (UniqueName: \"kubernetes.io/projected/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-kube-api-access-n292p\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.273955 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6997-account-create-update-8m4tt"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.285335 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerStarted","Data":"880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff"} Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.306151 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25zc4" event={"ID":"de1618cb-bde8-4c44-846b-aabcbb2e3698","Type":"ContainerStarted","Data":"3037d874e36791e6cfc66704bdb5636c0b86c8799635208a737265d2e378f1e9"} Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.307520 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-w664d"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.310292 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" event={"ID":"fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98","Type":"ContainerDied","Data":"1a39622795fcee2480fa11d46e5b0a378fa7f05d8940510d6901929d93fc194c"} Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.310514 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6656dd9c95-t6pv5" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.315816 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lthk" event={"ID":"3b487610-fee5-485d-8034-78634876c316","Type":"ContainerDied","Data":"a90596e3a0bfe2200823f397a442f9b32ce693c77c30fc9c957ee0370e7aa130"} Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.315950 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lthk" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.317280 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" (UID: "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.361811 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.365986 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" (UID: "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.371766 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.371790 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.407730 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75db-account-create-update-fhlwc"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.408822 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" (UID: "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.428068 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-zngng"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.429235 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-config" (OuterVolumeSpecName: "config") pod "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" (UID: "fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.430831 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-25zc4" podStartSLOduration=4.630961288 podStartE2EDuration="13.430801263s" podCreationTimestamp="2026-01-23 13:49:29 +0000 UTC" firstStartedPulling="2026-01-23 13:49:32.498855497 +0000 UTC m=+1013.521393122" lastFinishedPulling="2026-01-23 13:49:41.298695472 +0000 UTC m=+1022.321233097" observedRunningTime="2026-01-23 13:49:42.328698242 +0000 UTC m=+1023.351235867" watchObservedRunningTime="2026-01-23 13:49:42.430801263 +0000 UTC m=+1023.453338888" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.473363 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.473400 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.475839 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.532658 4771 scope.go:117] "RemoveContainer" containerID="c6e4a3e7a50f1a1ebb31b38d4c4cdf33b60d371a7a628878d8b11cc79d7d2693" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.562139 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-62ab-account-create-update-fwllw"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.572698 4771 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"watcher-db-secret" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.572978 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lthk"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.585894 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lthk"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.592430 4771 scope.go:117] "RemoveContainer" containerID="1c455d8facc4648b27ae811b59a63c060de1c5406c6e81d74784e97d4505c308" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.606603 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vq7"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.615753 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2vq7"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.644764 4771 scope.go:117] "RemoveContainer" containerID="3a0792d437fd829601da985a638f8723eaf333f603842808f670fa7fd50e7998" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.673485 4771 scope.go:117] "RemoveContainer" containerID="125376e1bff67cbb711f8602b1736c40460d44c57087e5d692b34d9559c8df32" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.674388 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6656dd9c95-t6pv5"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.685436 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6656dd9c95-t6pv5"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.865821 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hgnnk"] Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866296 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="extract-utilities" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866313 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="extract-utilities" Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866324 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerName="init" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866333 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerName="init" Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866353 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="registry-server" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866362 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="registry-server" Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866391 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="extract-utilities" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866400 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="extract-utilities" Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866428 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerName="dnsmasq-dns" Jan 23 
13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866437 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerName="dnsmasq-dns" Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866446 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="registry-server" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866456 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="registry-server" Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866474 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="extract-content" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866483 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="extract-content" Jan 23 13:49:42 crc kubenswrapper[4771]: E0123 13:49:42.866498 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="extract-content" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866507 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="extract-content" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866712 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" containerName="dnsmasq-dns" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866730 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b487610-fee5-485d-8034-78634876c316" containerName="registry-server" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.866753 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" containerName="registry-server" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.867482 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.871755 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.886123 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knvl\" (UniqueName: \"kubernetes.io/projected/e62c608f-6582-438d-b155-14350825d03a-kube-api-access-2knvl\") pod \"root-account-create-update-hgnnk\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.886182 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c608f-6582-438d-b155-14350825d03a-operator-scripts\") pod \"root-account-create-update-hgnnk\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.888209 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hgnnk"] Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.974654 4771 scope.go:117] "RemoveContainer" containerID="bbc68e29d18e5420f01046eebe851c9b12357a4df9d801077fce66219adb0ae1" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.989005 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2knvl\" (UniqueName: \"kubernetes.io/projected/e62c608f-6582-438d-b155-14350825d03a-kube-api-access-2knvl\") pod \"root-account-create-update-hgnnk\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.989098 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c608f-6582-438d-b155-14350825d03a-operator-scripts\") pod \"root-account-create-update-hgnnk\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.990268 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c608f-6582-438d-b155-14350825d03a-operator-scripts\") pod \"root-account-create-update-hgnnk\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:42 crc kubenswrapper[4771]: I0123 13:49:42.993005 4771 scope.go:117] "RemoveContainer" containerID="524a79d3a4c9409e1f828a13d6135d17c8b9bff16db66c41802d6f0a58572f59" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.075915 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2knvl\" (UniqueName: \"kubernetes.io/projected/e62c608f-6582-438d-b155-14350825d03a-kube-api-access-2knvl\") pod \"root-account-create-update-hgnnk\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.112435 4771 scope.go:117] "RemoveContainer" containerID="da6315340567dd403677a536d1f2c88fd41baed94f2905fc2f493db066765754" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.245486 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3b487610-fee5-485d-8034-78634876c316" path="/var/lib/kubelet/pods/3b487610-fee5-485d-8034-78634876c316/volumes" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.246220 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77220d49-56d8-4882-a85e-c5772ea35ad1" path="/var/lib/kubelet/pods/77220d49-56d8-4882-a85e-c5772ea35ad1/volumes" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.246865 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98" path="/var/lib/kubelet/pods/fb9a4d0a-9fe6-4b83-bd87-3f5f8eeddf98/volumes" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.269330 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.330721 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4fa8367-bad7-4681-93a1-835923d93421","Type":"ContainerStarted","Data":"8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.331458 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.333107 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w664d" event={"ID":"5491bbf3-e170-4960-8f02-f9a9c0d5094e","Type":"ContainerStarted","Data":"1174ae97efe1826db4040493ed7839f7f7c36cdcc60867afc449b71511f1a1fc"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.336423 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75db-account-create-update-fhlwc" event={"ID":"6c816062-a5de-4c59-9c07-fa34ad4e8966","Type":"ContainerStarted","Data":"b9a0c36d14348d2e7d9cc5c415f95b5adf28712dae5b541f16859838fccababe"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.338264 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-62ab-account-create-update-fwllw" event={"ID":"891c08fa-6bf5-4df9-b57c-9af771aab285","Type":"ContainerStarted","Data":"122769227c2858419b2b529090401554e0d7f1a7a2050c1597934eec87e92ece"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.345317 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6997-account-create-update-8m4tt" event={"ID":"1a93887d-524c-4c98-b15f-b5370b5b3fb2","Type":"ContainerStarted","Data":"7d5a14df78fe22eeaacc1030a0f4bf464249eb52d9a846c50fb640b65f9700cc"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.345384 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6997-account-create-update-8m4tt" event={"ID":"1a93887d-524c-4c98-b15f-b5370b5b3fb2","Type":"ContainerStarted","Data":"cf3c927136b145408373463905c4f529a5e0f49d55b4861d58ddee11b3f92aae"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.355178 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-zngng" event={"ID":"fb5b01d0-80b6-476d-90c7-960f8bcf901b","Type":"ContainerStarted","Data":"d00ba66ded203cc9d511299591ff28fa45bcb84b9fa1827f10429951d7d21404"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.364018 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.695748267 podStartE2EDuration="1m6.363993149s" podCreationTimestamp="2026-01-23 13:48:37 +0000 UTC" firstStartedPulling="2026-01-23 
13:48:38.77622717 +0000 UTC m=+959.798764795" lastFinishedPulling="2026-01-23 13:49:41.444472052 +0000 UTC m=+1022.467009677" observedRunningTime="2026-01-23 13:49:43.359303151 +0000 UTC m=+1024.381840786" watchObservedRunningTime="2026-01-23 13:49:43.363993149 +0000 UTC m=+1024.386530764" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.393941 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6997-account-create-update-8m4tt" podStartSLOduration=8.393917833 podStartE2EDuration="8.393917833s" podCreationTimestamp="2026-01-23 13:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:49:43.384916209 +0000 UTC m=+1024.407453844" watchObservedRunningTime="2026-01-23 13:49:43.393917833 +0000 UTC m=+1024.416455468" Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.399939 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q42h4" event={"ID":"97e51270-e42d-4da2-bddd-22a2b4c2fa44","Type":"ContainerStarted","Data":"a6a147cec4259c145be16e972687ccc3ac25e1b26bdf135e57cb073e0c8b71d0"} Jan 23 13:49:43 crc kubenswrapper[4771]: I0123 13:49:43.426356 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-q42h4" podStartSLOduration=8.426320146 podStartE2EDuration="8.426320146s" podCreationTimestamp="2026-01-23 13:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:49:43.42201585 +0000 UTC m=+1024.444553515" watchObservedRunningTime="2026-01-23 13:49:43.426320146 +0000 UTC m=+1024.448857811" Jan 23 13:49:44 crc kubenswrapper[4771]: W0123 13:49:43.987978 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode62c608f_6582_438d_b155_14350825d03a.slice/crio-2c9413e24a9cfd7d0f50f5f6988af2159b1c53a10cc58a1a0484d4f4f8cd9ebd WatchSource:0}: Error finding container 2c9413e24a9cfd7d0f50f5f6988af2159b1c53a10cc58a1a0484d4f4f8cd9ebd: Status 404 returned error can't find the container with id 2c9413e24a9cfd7d0f50f5f6988af2159b1c53a10cc58a1a0484d4f4f8cd9ebd Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:43.989383 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hgnnk"] Jan 23 13:49:44 crc kubenswrapper[4771]: E0123 13:49:44.312064 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c816062_a5de_4c59_9c07_fa34ad4e8966.slice/crio-f97bc929f4863339e563b135b7e32c424f6ab238a9c712fd4a10a526ab07795c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5b01d0_80b6_476d_90c7_960f8bcf901b.slice/crio-aaac9362f990de0515778582967e45753ed2e06623140898141c3b1004f3e23e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5b01d0_80b6_476d_90c7_960f8bcf901b.slice/crio-conmon-aaac9362f990de0515778582967e45753ed2e06623140898141c3b1004f3e23e.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5491bbf3_e170_4960_8f02_f9a9c0d5094e.slice/crio-conmon-cc7489c78cd7f1871f2685c2026230c4f69c44683e19719eae0503bf57f423fe.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.410009 4771 generic.go:334] "Generic (PLEG): container finished" podID="fb5b01d0-80b6-476d-90c7-960f8bcf901b" containerID="aaac9362f990de0515778582967e45753ed2e06623140898141c3b1004f3e23e" exitCode=0 Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.410634 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-zngng" event={"ID":"fb5b01d0-80b6-476d-90c7-960f8bcf901b","Type":"ContainerDied","Data":"aaac9362f990de0515778582967e45753ed2e06623140898141c3b1004f3e23e"} Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.412260 4771 generic.go:334] "Generic (PLEG): container finished" podID="97e51270-e42d-4da2-bddd-22a2b4c2fa44" containerID="a6a147cec4259c145be16e972687ccc3ac25e1b26bdf135e57cb073e0c8b71d0" exitCode=0 Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.412337 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q42h4" event={"ID":"97e51270-e42d-4da2-bddd-22a2b4c2fa44","Type":"ContainerDied","Data":"a6a147cec4259c145be16e972687ccc3ac25e1b26bdf135e57cb073e0c8b71d0"} Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.413961 4771 generic.go:334] "Generic (PLEG): container finished" podID="5491bbf3-e170-4960-8f02-f9a9c0d5094e" containerID="cc7489c78cd7f1871f2685c2026230c4f69c44683e19719eae0503bf57f423fe" exitCode=0 Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.414034 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w664d" event={"ID":"5491bbf3-e170-4960-8f02-f9a9c0d5094e","Type":"ContainerDied","Data":"cc7489c78cd7f1871f2685c2026230c4f69c44683e19719eae0503bf57f423fe"} Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.415437 4771 generic.go:334] "Generic (PLEG): container finished" podID="891c08fa-6bf5-4df9-b57c-9af771aab285" containerID="cb4bd0ab677fb1845d3e7152eea904d93b713d0562e6b1ba8d43056f0d70153c" exitCode=0 Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.415497 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-62ab-account-create-update-fwllw" event={"ID":"891c08fa-6bf5-4df9-b57c-9af771aab285","Type":"ContainerDied","Data":"cb4bd0ab677fb1845d3e7152eea904d93b713d0562e6b1ba8d43056f0d70153c"} Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.416958 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c816062-a5de-4c59-9c07-fa34ad4e8966" containerID="f97bc929f4863339e563b135b7e32c424f6ab238a9c712fd4a10a526ab07795c" exitCode=0 Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.417034 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75db-account-create-update-fhlwc" event={"ID":"6c816062-a5de-4c59-9c07-fa34ad4e8966","Type":"ContainerDied","Data":"f97bc929f4863339e563b135b7e32c424f6ab238a9c712fd4a10a526ab07795c"} Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.429699 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgnnk" event={"ID":"e62c608f-6582-438d-b155-14350825d03a","Type":"ContainerStarted","Data":"2c9413e24a9cfd7d0f50f5f6988af2159b1c53a10cc58a1a0484d4f4f8cd9ebd"} Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.439010 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="1a93887d-524c-4c98-b15f-b5370b5b3fb2" containerID="7d5a14df78fe22eeaacc1030a0f4bf464249eb52d9a846c50fb640b65f9700cc" exitCode=0 Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.439621 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6997-account-create-update-8m4tt" event={"ID":"1a93887d-524c-4c98-b15f-b5370b5b3fb2","Type":"ContainerDied","Data":"7d5a14df78fe22eeaacc1030a0f4bf464249eb52d9a846c50fb640b65f9700cc"} Jan 23 13:49:44 crc kubenswrapper[4771]: I0123 13:49:44.468652 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 13:49:45 crc kubenswrapper[4771]: I0123 13:49:45.036820 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:49:45 crc kubenswrapper[4771]: E0123 13:49:45.037121 4771 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 13:49:45 crc kubenswrapper[4771]: E0123 13:49:45.037174 4771 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 13:49:45 crc kubenswrapper[4771]: E0123 13:49:45.037294 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift podName:cb429d80-3c7c-4014-9a5c-d40256e70014 nodeName:}" failed. No retries permitted until 2026-01-23 13:50:01.037259386 +0000 UTC m=+1042.059797021 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift") pod "swift-storage-0" (UID: "cb429d80-3c7c-4014-9a5c-d40256e70014") : configmap "swift-ring-files" not found Jan 23 13:49:45 crc kubenswrapper[4771]: I0123 13:49:45.450195 4771 generic.go:334] "Generic (PLEG): container finished" podID="e62c608f-6582-438d-b155-14350825d03a" containerID="79dcf3758397b1bbf972a6508c4dedfdd003c8dcb2dc410748fa1f1fe07f6d9b" exitCode=0 Jan 23 13:49:45 crc kubenswrapper[4771]: I0123 13:49:45.450746 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgnnk" event={"ID":"e62c608f-6582-438d-b155-14350825d03a","Type":"ContainerDied","Data":"79dcf3758397b1bbf972a6508c4dedfdd003c8dcb2dc410748fa1f1fe07f6d9b"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.019943 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.161834 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c816062-a5de-4c59-9c07-fa34ad4e8966-operator-scripts\") pod \"6c816062-a5de-4c59-9c07-fa34ad4e8966\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.162223 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4fm7\" (UniqueName: \"kubernetes.io/projected/6c816062-a5de-4c59-9c07-fa34ad4e8966-kube-api-access-j4fm7\") pod \"6c816062-a5de-4c59-9c07-fa34ad4e8966\" (UID: \"6c816062-a5de-4c59-9c07-fa34ad4e8966\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.164325 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c816062-a5de-4c59-9c07-fa34ad4e8966-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c816062-a5de-4c59-9c07-fa34ad4e8966" (UID: "6c816062-a5de-4c59-9c07-fa34ad4e8966"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.207711 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c816062-a5de-4c59-9c07-fa34ad4e8966-kube-api-access-j4fm7" (OuterVolumeSpecName: "kube-api-access-j4fm7") pod "6c816062-a5de-4c59-9c07-fa34ad4e8966" (UID: "6c816062-a5de-4c59-9c07-fa34ad4e8966"). InnerVolumeSpecName "kube-api-access-j4fm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.265117 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c816062-a5de-4c59-9c07-fa34ad4e8966-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.265159 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4fm7\" (UniqueName: \"kubernetes.io/projected/6c816062-a5de-4c59-9c07-fa34ad4e8966-kube-api-access-j4fm7\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.371904 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.413977 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-zngng" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.418347 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.437942 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w664d" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.468332 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.472480 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cntwb\" (UniqueName: \"kubernetes.io/projected/97e51270-e42d-4da2-bddd-22a2b4c2fa44-kube-api-access-cntwb\") pod \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.472564 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e51270-e42d-4da2-bddd-22a2b4c2fa44-operator-scripts\") pod \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\" (UID: \"97e51270-e42d-4da2-bddd-22a2b4c2fa44\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.473843 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97e51270-e42d-4da2-bddd-22a2b4c2fa44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97e51270-e42d-4da2-bddd-22a2b4c2fa44" (UID: "97e51270-e42d-4da2-bddd-22a2b4c2fa44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.476766 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e51270-e42d-4da2-bddd-22a2b4c2fa44-kube-api-access-cntwb" (OuterVolumeSpecName: "kube-api-access-cntwb") pod "97e51270-e42d-4da2-bddd-22a2b4c2fa44" (UID: "97e51270-e42d-4da2-bddd-22a2b4c2fa44"). InnerVolumeSpecName "kube-api-access-cntwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.487296 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w664d" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.487599 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w664d" event={"ID":"5491bbf3-e170-4960-8f02-f9a9c0d5094e","Type":"ContainerDied","Data":"1174ae97efe1826db4040493ed7839f7f7c36cdcc60867afc449b71511f1a1fc"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.487670 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1174ae97efe1826db4040493ed7839f7f7c36cdcc60867afc449b71511f1a1fc" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.492199 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-62ab-account-create-update-fwllw" event={"ID":"891c08fa-6bf5-4df9-b57c-9af771aab285","Type":"ContainerDied","Data":"122769227c2858419b2b529090401554e0d7f1a7a2050c1597934eec87e92ece"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.492244 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122769227c2858419b2b529090401554e0d7f1a7a2050c1597934eec87e92ece" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.492336 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-62ab-account-create-update-fwllw" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.496085 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75db-account-create-update-fhlwc" event={"ID":"6c816062-a5de-4c59-9c07-fa34ad4e8966","Type":"ContainerDied","Data":"b9a0c36d14348d2e7d9cc5c415f95b5adf28712dae5b541f16859838fccababe"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.496113 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a0c36d14348d2e7d9cc5c415f95b5adf28712dae5b541f16859838fccababe" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.496118 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75db-account-create-update-fhlwc" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.501524 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6997-account-create-update-8m4tt" event={"ID":"1a93887d-524c-4c98-b15f-b5370b5b3fb2","Type":"ContainerDied","Data":"cf3c927136b145408373463905c4f529a5e0f49d55b4861d58ddee11b3f92aae"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.501579 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf3c927136b145408373463905c4f529a5e0f49d55b4861d58ddee11b3f92aae" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.501662 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6997-account-create-update-8m4tt" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.510537 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerStarted","Data":"a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.514557 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-zngng" event={"ID":"fb5b01d0-80b6-476d-90c7-960f8bcf901b","Type":"ContainerDied","Data":"d00ba66ded203cc9d511299591ff28fa45bcb84b9fa1827f10429951d7d21404"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.514615 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00ba66ded203cc9d511299591ff28fa45bcb84b9fa1827f10429951d7d21404" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.514694 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-zngng" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.520183 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q42h4" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.520221 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q42h4" event={"ID":"97e51270-e42d-4da2-bddd-22a2b4c2fa44","Type":"ContainerDied","Data":"e22038c60c2b73105a67198a0b7fa114d08b19ef43d6b55bee195a3d8e6e8739"} Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.520241 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22038c60c2b73105a67198a0b7fa114d08b19ef43d6b55bee195a3d8e6e8739" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575054 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a93887d-524c-4c98-b15f-b5370b5b3fb2-operator-scripts\") pod \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575109 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz7t8\" (UniqueName: \"kubernetes.io/projected/fb5b01d0-80b6-476d-90c7-960f8bcf901b-kube-api-access-nz7t8\") pod \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575312 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c08fa-6bf5-4df9-b57c-9af771aab285-operator-scripts\") pod \"891c08fa-6bf5-4df9-b57c-9af771aab285\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575647 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a93887d-524c-4c98-b15f-b5370b5b3fb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a93887d-524c-4c98-b15f-b5370b5b3fb2" (UID: "1a93887d-524c-4c98-b15f-b5370b5b3fb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575812 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t76sz\" (UniqueName: \"kubernetes.io/projected/5491bbf3-e170-4960-8f02-f9a9c0d5094e-kube-api-access-t76sz\") pod \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575869 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5491bbf3-e170-4960-8f02-f9a9c0d5094e-operator-scripts\") pod \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\" (UID: \"5491bbf3-e170-4960-8f02-f9a9c0d5094e\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575882 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891c08fa-6bf5-4df9-b57c-9af771aab285-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "891c08fa-6bf5-4df9-b57c-9af771aab285" (UID: "891c08fa-6bf5-4df9-b57c-9af771aab285"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575907 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46qw\" (UniqueName: \"kubernetes.io/projected/1a93887d-524c-4c98-b15f-b5370b5b3fb2-kube-api-access-s46qw\") pod \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\" (UID: \"1a93887d-524c-4c98-b15f-b5370b5b3fb2\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.575933 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5b01d0-80b6-476d-90c7-960f8bcf901b-operator-scripts\") pod \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\" (UID: \"fb5b01d0-80b6-476d-90c7-960f8bcf901b\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.576058 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8gzf\" (UniqueName: \"kubernetes.io/projected/891c08fa-6bf5-4df9-b57c-9af771aab285-kube-api-access-g8gzf\") pod \"891c08fa-6bf5-4df9-b57c-9af771aab285\" (UID: \"891c08fa-6bf5-4df9-b57c-9af771aab285\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.577066 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c08fa-6bf5-4df9-b57c-9af771aab285-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.577088 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cntwb\" (UniqueName: \"kubernetes.io/projected/97e51270-e42d-4da2-bddd-22a2b4c2fa44-kube-api-access-cntwb\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.577101 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e51270-e42d-4da2-bddd-22a2b4c2fa44-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.577113 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a93887d-524c-4c98-b15f-b5370b5b3fb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.577069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5491bbf3-e170-4960-8f02-f9a9c0d5094e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5491bbf3-e170-4960-8f02-f9a9c0d5094e" (UID: "5491bbf3-e170-4960-8f02-f9a9c0d5094e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.577611 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5b01d0-80b6-476d-90c7-960f8bcf901b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb5b01d0-80b6-476d-90c7-960f8bcf901b" (UID: "fb5b01d0-80b6-476d-90c7-960f8bcf901b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.580360 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5b01d0-80b6-476d-90c7-960f8bcf901b-kube-api-access-nz7t8" (OuterVolumeSpecName: "kube-api-access-nz7t8") pod "fb5b01d0-80b6-476d-90c7-960f8bcf901b" (UID: "fb5b01d0-80b6-476d-90c7-960f8bcf901b"). InnerVolumeSpecName "kube-api-access-nz7t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.581465 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5491bbf3-e170-4960-8f02-f9a9c0d5094e-kube-api-access-t76sz" (OuterVolumeSpecName: "kube-api-access-t76sz") pod "5491bbf3-e170-4960-8f02-f9a9c0d5094e" (UID: "5491bbf3-e170-4960-8f02-f9a9c0d5094e"). InnerVolumeSpecName "kube-api-access-t76sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.581889 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891c08fa-6bf5-4df9-b57c-9af771aab285-kube-api-access-g8gzf" (OuterVolumeSpecName: "kube-api-access-g8gzf") pod "891c08fa-6bf5-4df9-b57c-9af771aab285" (UID: "891c08fa-6bf5-4df9-b57c-9af771aab285"). InnerVolumeSpecName "kube-api-access-g8gzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.582378 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a93887d-524c-4c98-b15f-b5370b5b3fb2-kube-api-access-s46qw" (OuterVolumeSpecName: "kube-api-access-s46qw") pod "1a93887d-524c-4c98-b15f-b5370b5b3fb2" (UID: "1a93887d-524c-4c98-b15f-b5370b5b3fb2"). InnerVolumeSpecName "kube-api-access-s46qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.680434 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz7t8\" (UniqueName: \"kubernetes.io/projected/fb5b01d0-80b6-476d-90c7-960f8bcf901b-kube-api-access-nz7t8\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.680480 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t76sz\" (UniqueName: \"kubernetes.io/projected/5491bbf3-e170-4960-8f02-f9a9c0d5094e-kube-api-access-t76sz\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.680491 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5491bbf3-e170-4960-8f02-f9a9c0d5094e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.680501 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5b01d0-80b6-476d-90c7-960f8bcf901b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.680516 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46qw\" (UniqueName: \"kubernetes.io/projected/1a93887d-524c-4c98-b15f-b5370b5b3fb2-kube-api-access-s46qw\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.680525 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8gzf\" (UniqueName: \"kubernetes.io/projected/891c08fa-6bf5-4df9-b57c-9af771aab285-kube-api-access-g8gzf\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.805959 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.884676 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2knvl\" (UniqueName: \"kubernetes.io/projected/e62c608f-6582-438d-b155-14350825d03a-kube-api-access-2knvl\") pod \"e62c608f-6582-438d-b155-14350825d03a\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.885184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c608f-6582-438d-b155-14350825d03a-operator-scripts\") pod \"e62c608f-6582-438d-b155-14350825d03a\" (UID: \"e62c608f-6582-438d-b155-14350825d03a\") " Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.885971 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62c608f-6582-438d-b155-14350825d03a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e62c608f-6582-438d-b155-14350825d03a" (UID: "e62c608f-6582-438d-b155-14350825d03a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.886227 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c608f-6582-438d-b155-14350825d03a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.889371 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62c608f-6582-438d-b155-14350825d03a-kube-api-access-2knvl" (OuterVolumeSpecName: "kube-api-access-2knvl") pod "e62c608f-6582-438d-b155-14350825d03a" (UID: "e62c608f-6582-438d-b155-14350825d03a"). InnerVolumeSpecName "kube-api-access-2knvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:46 crc kubenswrapper[4771]: I0123 13:49:46.988931 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2knvl\" (UniqueName: \"kubernetes.io/projected/e62c608f-6582-438d-b155-14350825d03a-kube-api-access-2knvl\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:47 crc kubenswrapper[4771]: I0123 13:49:47.542561 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hgnnk" event={"ID":"e62c608f-6582-438d-b155-14350825d03a","Type":"ContainerDied","Data":"2c9413e24a9cfd7d0f50f5f6988af2159b1c53a10cc58a1a0484d4f4f8cd9ebd"} Jan 23 13:49:47 crc kubenswrapper[4771]: I0123 13:49:47.542618 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c9413e24a9cfd7d0f50f5f6988af2159b1c53a10cc58a1a0484d4f4f8cd9ebd" Jan 23 13:49:47 crc kubenswrapper[4771]: I0123 13:49:47.542696 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hgnnk" Jan 23 13:49:47 crc kubenswrapper[4771]: I0123 13:49:47.969595 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 13:49:48 crc kubenswrapper[4771]: I0123 13:49:48.554804 4771 generic.go:334] "Generic (PLEG): container finished" podID="add41260-19c8-4989-a0a9-97a93316c6e8" containerID="e4acc9393208e4d49c802e4a57dd208b07d7085cb8b380720b50af01eee3b5dc" exitCode=0 Jan 23 13:49:48 crc kubenswrapper[4771]: I0123 13:49:48.554889 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"add41260-19c8-4989-a0a9-97a93316c6e8","Type":"ContainerDied","Data":"e4acc9393208e4d49c802e4a57dd208b07d7085cb8b380720b50af01eee3b5dc"} Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.296964 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hgnnk"] Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.310090 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hgnnk"] Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.570862 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerStarted","Data":"b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244"} Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.572759 4771 generic.go:334] "Generic (PLEG): container finished" podID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerID="7d99e1c86e9e4ecf377470ad86e9ca700b7f9077628c6f89ad3b8868b41a0180" exitCode=0 Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.572787 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c3f2be4-082b-4eb5-88d6-2b069d2dd361","Type":"ContainerDied","Data":"7d99e1c86e9e4ecf377470ad86e9ca700b7f9077628c6f89ad3b8868b41a0180"} Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.575135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"add41260-19c8-4989-a0a9-97a93316c6e8","Type":"ContainerStarted","Data":"fedc9c55f435f3b70a55f105e682b1e775ef14fa4e73f1fdc22ad477c4c9b51e"} Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.575366 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.659933 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=36.279570113 podStartE2EDuration="1m12.659900656s" podCreationTimestamp="2026-01-23 13:48:37 +0000 UTC" firstStartedPulling="2026-01-23 13:49:12.802535481 +0000 UTC m=+993.825073106" lastFinishedPulling="2026-01-23 13:49:49.182866024 +0000 UTC m=+1030.205403649" observedRunningTime="2026-01-23 13:49:49.623050933 +0000 UTC m=+1030.645588578" watchObservedRunningTime="2026-01-23 13:49:49.659900656 +0000 UTC m=+1030.682438281" Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.661145 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=43.252402059 podStartE2EDuration="1m17.661136776s" podCreationTimestamp="2026-01-23 13:48:32 +0000 UTC" firstStartedPulling="2026-01-23 13:48:35.707972317 +0000 UTC m=+956.730509942" 
lastFinishedPulling="2026-01-23 13:49:10.116707034 +0000 UTC m=+991.139244659" observedRunningTime="2026-01-23 13:49:49.657620535 +0000 UTC m=+1030.680158160" watchObservedRunningTime="2026-01-23 13:49:49.661136776 +0000 UTC m=+1030.683674401" Jan 23 13:49:49 crc kubenswrapper[4771]: I0123 13:49:49.686099 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:50 crc kubenswrapper[4771]: I0123 13:49:50.641191 4771 generic.go:334] "Generic (PLEG): container finished" podID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerID="25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20" exitCode=0 Jan 23 13:49:50 crc kubenswrapper[4771]: I0123 13:49:50.641280 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"205cfab6-722b-4d70-bdb7-3a12aaeea6e2","Type":"ContainerDied","Data":"25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20"} Jan 23 13:49:50 crc kubenswrapper[4771]: I0123 13:49:50.650055 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c3f2be4-082b-4eb5-88d6-2b069d2dd361","Type":"ContainerStarted","Data":"a36f014bb98d18e4806f92bca7540402079eeed1db8d1ce47ce3b311ac3a02e2"} Jan 23 13:49:50 crc kubenswrapper[4771]: I0123 13:49:50.713152 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.950354434 podStartE2EDuration="1m19.71313118s" podCreationTimestamp="2026-01-23 13:48:31 +0000 UTC" firstStartedPulling="2026-01-23 13:48:33.633471429 +0000 UTC m=+954.656009054" lastFinishedPulling="2026-01-23 13:49:10.396248175 +0000 UTC m=+991.418785800" observedRunningTime="2026-01-23 13:49:50.708508033 +0000 UTC m=+1031.731045678" watchObservedRunningTime="2026-01-23 13:49:50.71313118 +0000 UTC m=+1031.735668805" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.240426 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62c608f-6582-438d-b155-14350825d03a" path="/var/lib/kubelet/pods/e62c608f-6582-438d-b155-14350825d03a/volumes" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.535857 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nxbfr" podUID="686807bb-241a-4fdb-bca8-0eba0745aed1" containerName="ovn-controller" probeResult="failure" output=< Jan 23 13:49:51 crc kubenswrapper[4771]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 13:49:51 crc kubenswrapper[4771]: > Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.583070 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.587628 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7txgd" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.659730 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"205cfab6-722b-4d70-bdb7-3a12aaeea6e2","Type":"ContainerStarted","Data":"53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55"} Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.660043 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.661012 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="de1618cb-bde8-4c44-846b-aabcbb2e3698" containerID="3037d874e36791e6cfc66704bdb5636c0b86c8799635208a737265d2e378f1e9" exitCode=0 Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.661129 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25zc4" event={"ID":"de1618cb-bde8-4c44-846b-aabcbb2e3698","Type":"ContainerDied","Data":"3037d874e36791e6cfc66704bdb5636c0b86c8799635208a737265d2e378f1e9"} Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.691324 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.647496766 podStartE2EDuration="1m20.691299974s" podCreationTimestamp="2026-01-23 13:48:31 +0000 UTC" firstStartedPulling="2026-01-23 13:48:34.126779586 +0000 UTC m=+955.149317211" lastFinishedPulling="2026-01-23 13:49:10.170582794 +0000 UTC m=+991.193120419" observedRunningTime="2026-01-23 13:49:51.681072851 +0000 UTC m=+1032.703610486" watchObservedRunningTime="2026-01-23 13:49:51.691299974 +0000 UTC m=+1032.713837599" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.864238 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nxbfr-config-lghhb"] Jan 23 13:49:51 crc kubenswrapper[4771]: E0123 13:49:51.866577 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62c608f-6582-438d-b155-14350825d03a" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866607 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62c608f-6582-438d-b155-14350825d03a" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: E0123 13:49:51.866626 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e51270-e42d-4da2-bddd-22a2b4c2fa44" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866636 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e51270-e42d-4da2-bddd-22a2b4c2fa44" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: E0123 13:49:51.866662 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5b01d0-80b6-476d-90c7-960f8bcf901b" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866671 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5b01d0-80b6-476d-90c7-960f8bcf901b" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: E0123 13:49:51.866687 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5491bbf3-e170-4960-8f02-f9a9c0d5094e" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866694 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5491bbf3-e170-4960-8f02-f9a9c0d5094e" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: E0123 13:49:51.866709 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c816062-a5de-4c59-9c07-fa34ad4e8966" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866716 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c816062-a5de-4c59-9c07-fa34ad4e8966" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: E0123 13:49:51.866728 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a93887d-524c-4c98-b15f-b5370b5b3fb2" containerName="mariadb-account-create-update" Jan 23 
13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866735 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a93887d-524c-4c98-b15f-b5370b5b3fb2" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: E0123 13:49:51.866754 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891c08fa-6bf5-4df9-b57c-9af771aab285" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866761 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="891c08fa-6bf5-4df9-b57c-9af771aab285" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866959 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="891c08fa-6bf5-4df9-b57c-9af771aab285" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866975 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a93887d-524c-4c98-b15f-b5370b5b3fb2" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.866990 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5491bbf3-e170-4960-8f02-f9a9c0d5094e" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.867006 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c816062-a5de-4c59-9c07-fa34ad4e8966" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.867018 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e51270-e42d-4da2-bddd-22a2b4c2fa44" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.867029 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62c608f-6582-438d-b155-14350825d03a" containerName="mariadb-account-create-update" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.867041 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5b01d0-80b6-476d-90c7-960f8bcf901b" containerName="mariadb-database-create" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.867830 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.870227 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 13:49:51 crc kubenswrapper[4771]: I0123 13:49:51.889828 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nxbfr-config-lghhb"] Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.015766 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.015877 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng7kg\" (UniqueName: \"kubernetes.io/projected/af65c10a-3184-4454-aff5-4d5fc32f7515-kube-api-access-ng7kg\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.015935 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run-ovn\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.015961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-log-ovn\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.016036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-additional-scripts\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.016103 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-scripts\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.117940 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng7kg\" (UniqueName: \"kubernetes.io/projected/af65c10a-3184-4454-aff5-4d5fc32f7515-kube-api-access-ng7kg\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.118035 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run-ovn\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.118072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-log-ovn\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.118585 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-log-ovn\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.118627 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-additional-scripts\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.118521 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run-ovn\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.118802 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-scripts\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.119048 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.119186 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.119664 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-additional-scripts\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.121241 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-scripts\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.146849 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng7kg\" (UniqueName: \"kubernetes.io/projected/af65c10a-3184-4454-aff5-4d5fc32f7515-kube-api-access-ng7kg\") pod \"ovn-controller-nxbfr-config-lghhb\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.187503 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.676388 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 13:49:52 crc kubenswrapper[4771]: W0123 13:49:52.693677 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf65c10a_3184_4454_aff5_4d5fc32f7515.slice/crio-b739c5f62cb9a22068e2e176526ac3248806ceaf941afe9ac8db9148d3fff8db WatchSource:0}: Error finding container b739c5f62cb9a22068e2e176526ac3248806ceaf941afe9ac8db9148d3fff8db: Status 404 returned error can't find the container with id b739c5f62cb9a22068e2e176526ac3248806ceaf941afe9ac8db9148d3fff8db Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.696644 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nxbfr-config-lghhb"] Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.917814 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-56zpw"] Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.919698 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.922054 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 13:49:52 crc kubenswrapper[4771]: I0123 13:49:52.941436 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-56zpw"] Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.042993 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3411c00-1505-4570-81d3-25b9a9d308e7-operator-scripts\") pod \"root-account-create-update-56zpw\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.043208 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5khx\" (UniqueName: \"kubernetes.io/projected/c3411c00-1505-4570-81d3-25b9a9d308e7-kube-api-access-s5khx\") pod \"root-account-create-update-56zpw\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.057374 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.145112 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de1618cb-bde8-4c44-846b-aabcbb2e3698-etc-swift\") pod \"de1618cb-bde8-4c44-846b-aabcbb2e3698\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.145217 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-ring-data-devices\") pod \"de1618cb-bde8-4c44-846b-aabcbb2e3698\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.145403 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-scripts\") pod \"de1618cb-bde8-4c44-846b-aabcbb2e3698\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.145451 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-swiftconf\") pod \"de1618cb-bde8-4c44-846b-aabcbb2e3698\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.145520 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-dispersionconf\") pod \"de1618cb-bde8-4c44-846b-aabcbb2e3698\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.145625 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-combined-ca-bundle\") pod \"de1618cb-bde8-4c44-846b-aabcbb2e3698\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.145779 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8nx6\" (UniqueName: \"kubernetes.io/projected/de1618cb-bde8-4c44-846b-aabcbb2e3698-kube-api-access-c8nx6\") pod \"de1618cb-bde8-4c44-846b-aabcbb2e3698\" (UID: \"de1618cb-bde8-4c44-846b-aabcbb2e3698\") " Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.146840 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "de1618cb-bde8-4c44-846b-aabcbb2e3698" (UID: "de1618cb-bde8-4c44-846b-aabcbb2e3698"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.147093 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1618cb-bde8-4c44-846b-aabcbb2e3698-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "de1618cb-bde8-4c44-846b-aabcbb2e3698" (UID: "de1618cb-bde8-4c44-846b-aabcbb2e3698"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.147921 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5khx\" (UniqueName: \"kubernetes.io/projected/c3411c00-1505-4570-81d3-25b9a9d308e7-kube-api-access-s5khx\") pod \"root-account-create-update-56zpw\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.148177 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3411c00-1505-4570-81d3-25b9a9d308e7-operator-scripts\") pod \"root-account-create-update-56zpw\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.148273 4771 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de1618cb-bde8-4c44-846b-aabcbb2e3698-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.148290 4771 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.149274 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3411c00-1505-4570-81d3-25b9a9d308e7-operator-scripts\") pod \"root-account-create-update-56zpw\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.163764 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1618cb-bde8-4c44-846b-aabcbb2e3698-kube-api-access-c8nx6" (OuterVolumeSpecName: "kube-api-access-c8nx6") pod "de1618cb-bde8-4c44-846b-aabcbb2e3698" (UID: "de1618cb-bde8-4c44-846b-aabcbb2e3698"). InnerVolumeSpecName "kube-api-access-c8nx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.174704 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "de1618cb-bde8-4c44-846b-aabcbb2e3698" (UID: "de1618cb-bde8-4c44-846b-aabcbb2e3698"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.180695 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5khx\" (UniqueName: \"kubernetes.io/projected/c3411c00-1505-4570-81d3-25b9a9d308e7-kube-api-access-s5khx\") pod \"root-account-create-update-56zpw\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.208099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "de1618cb-bde8-4c44-846b-aabcbb2e3698" (UID: "de1618cb-bde8-4c44-846b-aabcbb2e3698"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.208440 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1618cb-bde8-4c44-846b-aabcbb2e3698" (UID: "de1618cb-bde8-4c44-846b-aabcbb2e3698"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.221824 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-scripts" (OuterVolumeSpecName: "scripts") pod "de1618cb-bde8-4c44-846b-aabcbb2e3698" (UID: "de1618cb-bde8-4c44-846b-aabcbb2e3698"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.246528 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.250127 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8nx6\" (UniqueName: \"kubernetes.io/projected/de1618cb-bde8-4c44-846b-aabcbb2e3698-kube-api-access-c8nx6\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.250174 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1618cb-bde8-4c44-846b-aabcbb2e3698-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.250183 4771 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.250193 4771 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.250203 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1618cb-bde8-4c44-846b-aabcbb2e3698-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.681906 4771 generic.go:334] "Generic (PLEG): container finished" podID="af65c10a-3184-4454-aff5-4d5fc32f7515" containerID="88e8d3ab56095979828ea7c6448d978a8721a8d3bdd07f7f37ffb05bfae28a8e" exitCode=0 Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.682015 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr-config-lghhb" event={"ID":"af65c10a-3184-4454-aff5-4d5fc32f7515","Type":"ContainerDied","Data":"88e8d3ab56095979828ea7c6448d978a8721a8d3bdd07f7f37ffb05bfae28a8e"} Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.682814 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr-config-lghhb" event={"ID":"af65c10a-3184-4454-aff5-4d5fc32f7515","Type":"ContainerStarted","Data":"b739c5f62cb9a22068e2e176526ac3248806ceaf941afe9ac8db9148d3fff8db"} Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.685937 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-25zc4" 
event={"ID":"de1618cb-bde8-4c44-846b-aabcbb2e3698","Type":"ContainerDied","Data":"60f76967e8ae606599c09f90c9cc473bec3d20f31a316ddaf8586b2362899a95"} Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.686004 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f76967e8ae606599c09f90c9cc473bec3d20f31a316ddaf8586b2362899a95" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.686068 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-25zc4" Jan 23 13:49:53 crc kubenswrapper[4771]: I0123 13:49:53.838592 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-56zpw"] Jan 23 13:49:53 crc kubenswrapper[4771]: W0123 13:49:53.841085 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3411c00_1505_4570_81d3_25b9a9d308e7.slice/crio-9ad1f5824ee500e2e592c5e3d59c4928ab940e14791a8dda4592a250529dea67 WatchSource:0}: Error finding container 9ad1f5824ee500e2e592c5e3d59c4928ab940e14791a8dda4592a250529dea67: Status 404 returned error can't find the container with id 9ad1f5824ee500e2e592c5e3d59c4928ab940e14791a8dda4592a250529dea67 Jan 23 13:49:54 crc kubenswrapper[4771]: I0123 13:49:54.685304 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:54 crc kubenswrapper[4771]: I0123 13:49:54.688674 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:54 crc kubenswrapper[4771]: I0123 13:49:54.700592 4771 generic.go:334] "Generic (PLEG): container finished" podID="c3411c00-1505-4570-81d3-25b9a9d308e7" containerID="162e65c46d4a60b1e4293bdde905464abe77cc1e8bbef0b366bb479371a11a98" exitCode=0 Jan 23 13:49:54 crc kubenswrapper[4771]: I0123 13:49:54.700910 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-56zpw" event={"ID":"c3411c00-1505-4570-81d3-25b9a9d308e7","Type":"ContainerDied","Data":"162e65c46d4a60b1e4293bdde905464abe77cc1e8bbef0b366bb479371a11a98"} Jan 23 13:49:54 crc kubenswrapper[4771]: I0123 13:49:54.700950 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-56zpw" event={"ID":"c3411c00-1505-4570-81d3-25b9a9d308e7","Type":"ContainerStarted","Data":"9ad1f5824ee500e2e592c5e3d59c4928ab940e14791a8dda4592a250529dea67"} Jan 23 13:49:54 crc kubenswrapper[4771]: I0123 13:49:54.703517 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.108537 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.192754 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run\") pod \"af65c10a-3184-4454-aff5-4d5fc32f7515\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.193485 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-log-ovn\") pod \"af65c10a-3184-4454-aff5-4d5fc32f7515\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.193641 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-scripts\") pod \"af65c10a-3184-4454-aff5-4d5fc32f7515\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.193702 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-additional-scripts\") pod \"af65c10a-3184-4454-aff5-4d5fc32f7515\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.193821 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng7kg\" (UniqueName: \"kubernetes.io/projected/af65c10a-3184-4454-aff5-4d5fc32f7515-kube-api-access-ng7kg\") pod \"af65c10a-3184-4454-aff5-4d5fc32f7515\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.193891 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run-ovn\") pod \"af65c10a-3184-4454-aff5-4d5fc32f7515\" (UID: \"af65c10a-3184-4454-aff5-4d5fc32f7515\") " Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.194936 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "af65c10a-3184-4454-aff5-4d5fc32f7515" (UID: "af65c10a-3184-4454-aff5-4d5fc32f7515"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.195002 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run" (OuterVolumeSpecName: "var-run") pod "af65c10a-3184-4454-aff5-4d5fc32f7515" (UID: "af65c10a-3184-4454-aff5-4d5fc32f7515"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.195028 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "af65c10a-3184-4454-aff5-4d5fc32f7515" (UID: "af65c10a-3184-4454-aff5-4d5fc32f7515"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.196245 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "af65c10a-3184-4454-aff5-4d5fc32f7515" (UID: "af65c10a-3184-4454-aff5-4d5fc32f7515"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.196682 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-scripts" (OuterVolumeSpecName: "scripts") pod "af65c10a-3184-4454-aff5-4d5fc32f7515" (UID: "af65c10a-3184-4454-aff5-4d5fc32f7515"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.206383 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af65c10a-3184-4454-aff5-4d5fc32f7515-kube-api-access-ng7kg" (OuterVolumeSpecName: "kube-api-access-ng7kg") pod "af65c10a-3184-4454-aff5-4d5fc32f7515" (UID: "af65c10a-3184-4454-aff5-4d5fc32f7515"). InnerVolumeSpecName "kube-api-access-ng7kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.296512 4771 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.296557 4771 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.296568 4771 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/af65c10a-3184-4454-aff5-4d5fc32f7515-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.296579 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.296588 4771 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/af65c10a-3184-4454-aff5-4d5fc32f7515-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.296597 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng7kg\" (UniqueName: \"kubernetes.io/projected/af65c10a-3184-4454-aff5-4d5fc32f7515-kube-api-access-ng7kg\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.713607 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr-config-lghhb" event={"ID":"af65c10a-3184-4454-aff5-4d5fc32f7515","Type":"ContainerDied","Data":"b739c5f62cb9a22068e2e176526ac3248806ceaf941afe9ac8db9148d3fff8db"} Jan 23 13:49:55 crc kubenswrapper[4771]: I0123 13:49:55.713689 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b739c5f62cb9a22068e2e176526ac3248806ceaf941afe9ac8db9148d3fff8db" Jan 23 13:49:55 crc 
kubenswrapper[4771]: I0123 13:49:55.713682 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-lghhb" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.172235 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.257598 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nxbfr-config-lghhb"] Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.265631 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nxbfr-config-lghhb"] Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.318667 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3411c00-1505-4570-81d3-25b9a9d308e7-operator-scripts\") pod \"c3411c00-1505-4570-81d3-25b9a9d308e7\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.318755 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5khx\" (UniqueName: \"kubernetes.io/projected/c3411c00-1505-4570-81d3-25b9a9d308e7-kube-api-access-s5khx\") pod \"c3411c00-1505-4570-81d3-25b9a9d308e7\" (UID: \"c3411c00-1505-4570-81d3-25b9a9d308e7\") " Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.320616 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3411c00-1505-4570-81d3-25b9a9d308e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3411c00-1505-4570-81d3-25b9a9d308e7" (UID: "c3411c00-1505-4570-81d3-25b9a9d308e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.325558 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3411c00-1505-4570-81d3-25b9a9d308e7-kube-api-access-s5khx" (OuterVolumeSpecName: "kube-api-access-s5khx") pod "c3411c00-1505-4570-81d3-25b9a9d308e7" (UID: "c3411c00-1505-4570-81d3-25b9a9d308e7"). InnerVolumeSpecName "kube-api-access-s5khx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.422246 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3411c00-1505-4570-81d3-25b9a9d308e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.422296 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5khx\" (UniqueName: \"kubernetes.io/projected/c3411c00-1505-4570-81d3-25b9a9d308e7-kube-api-access-s5khx\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.439534 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nxbfr-config-2k59n"] Jan 23 13:49:56 crc kubenswrapper[4771]: E0123 13:49:56.440134 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af65c10a-3184-4454-aff5-4d5fc32f7515" containerName="ovn-config" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.440186 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="af65c10a-3184-4454-aff5-4d5fc32f7515" containerName="ovn-config" Jan 23 13:49:56 crc kubenswrapper[4771]: E0123 13:49:56.440206 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3411c00-1505-4570-81d3-25b9a9d308e7" containerName="mariadb-account-create-update" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.440216 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3411c00-1505-4570-81d3-25b9a9d308e7" containerName="mariadb-account-create-update" Jan 23 13:49:56 crc kubenswrapper[4771]: E0123 13:49:56.440255 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1618cb-bde8-4c44-846b-aabcbb2e3698" containerName="swift-ring-rebalance" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.440265 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1618cb-bde8-4c44-846b-aabcbb2e3698" containerName="swift-ring-rebalance" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.440526 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="af65c10a-3184-4454-aff5-4d5fc32f7515" containerName="ovn-config" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.440553 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1618cb-bde8-4c44-846b-aabcbb2e3698" containerName="swift-ring-rebalance" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.440572 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3411c00-1505-4570-81d3-25b9a9d308e7" containerName="mariadb-account-create-update" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.441633 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.445208 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.452627 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nxbfr-config-2k59n"] Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.524517 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-additional-scripts\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.524642 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run-ovn\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.524665 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.524709 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-scripts\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.524735 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-log-ovn\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.524896 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjmj\" (UniqueName: \"kubernetes.io/projected/8f26e5a5-124b-4867-8e98-42b6b5843541-kube-api-access-tjjmj\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.627618 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run-ovn\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.627729 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run\") pod 
\"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.628104 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run-ovn\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.628114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.628191 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-scripts\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.630813 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-scripts\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.631093 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-log-ovn\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.631203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-log-ovn\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.631364 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjjmj\" (UniqueName: \"kubernetes.io/projected/8f26e5a5-124b-4867-8e98-42b6b5843541-kube-api-access-tjjmj\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.632145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-additional-scripts\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.632769 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-additional-scripts\") pod 
\"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.654027 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjjmj\" (UniqueName: \"kubernetes.io/projected/8f26e5a5-124b-4867-8e98-42b6b5843541-kube-api-access-tjjmj\") pod \"ovn-controller-nxbfr-config-2k59n\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.721166 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.724665 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-56zpw" event={"ID":"c3411c00-1505-4570-81d3-25b9a9d308e7","Type":"ContainerDied","Data":"9ad1f5824ee500e2e592c5e3d59c4928ab940e14791a8dda4592a250529dea67"} Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.724682 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-56zpw" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.724707 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ad1f5824ee500e2e592c5e3d59c4928ab940e14791a8dda4592a250529dea67" Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.725759 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="prometheus" containerID="cri-o://880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff" gracePeriod=600 Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.726228 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="thanos-sidecar" containerID="cri-o://b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244" gracePeriod=600 Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.726285 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="config-reloader" containerID="cri-o://a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd" gracePeriod=600 Jan 23 13:49:56 crc kubenswrapper[4771]: I0123 13:49:56.758271 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.446129 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af65c10a-3184-4454-aff5-4d5fc32f7515" path="/var/lib/kubelet/pods/af65c10a-3184-4454-aff5-4d5fc32f7515/volumes" Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.447085 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nxbfr" Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.679941 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nxbfr-config-2k59n"] Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.766437 4771 generic.go:334] "Generic (PLEG): container finished" podID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerID="b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244" exitCode=0 Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.766859 4771 generic.go:334] "Generic (PLEG): container finished" podID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerID="880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff" exitCode=0 Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.766596 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerDied","Data":"b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244"} Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.766987 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerDied","Data":"880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff"} Jan 23 13:49:57 crc kubenswrapper[4771]: I0123 13:49:57.769698 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr-config-2k59n" event={"ID":"8f26e5a5-124b-4867-8e98-42b6b5843541","Type":"ContainerStarted","Data":"e2ce009428f80490f6cd1fa5b59cc6f9f38ad3b003ffa507b94c5eec02e689ef"} Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.250786 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.379833 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-thanos-prometheus-http-client-file\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.379909 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-config\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.379939 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-0\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.379990 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-2\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.380047 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-1\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.380113 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/eb8a3435-994c-4d4d-aefa-2e60577378cf-config-out\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.380144 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jcrd\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-kube-api-access-9jcrd\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.380175 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-web-config\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.380198 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-tls-assets\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.380287 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"eb8a3435-994c-4d4d-aefa-2e60577378cf\" (UID: \"eb8a3435-994c-4d4d-aefa-2e60577378cf\") " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.380933 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.381018 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.381261 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.387948 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8a3435-994c-4d4d-aefa-2e60577378cf-config-out" (OuterVolumeSpecName: "config-out") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.390977 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.392330 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-config" (OuterVolumeSpecName: "config") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.395625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-kube-api-access-9jcrd" (OuterVolumeSpecName: "kube-api-access-9jcrd") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "kube-api-access-9jcrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.397148 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.420108 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-web-config" (OuterVolumeSpecName: "web-config") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.421978 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "eb8a3435-994c-4d4d-aefa-2e60577378cf" (UID: "eb8a3435-994c-4d4d-aefa-2e60577378cf"). InnerVolumeSpecName "pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.482441 4771 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483291 4771 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483348 4771 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/eb8a3435-994c-4d4d-aefa-2e60577378cf-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483365 4771 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/eb8a3435-994c-4d4d-aefa-2e60577378cf-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483381 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jcrd\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-kube-api-access-9jcrd\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483400 4771 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483426 4771 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/eb8a3435-994c-4d4d-aefa-2e60577378cf-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483474 4771 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") on node \"crc\" " Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483491 4771 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.483505 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb8a3435-994c-4d4d-aefa-2e60577378cf-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.503872 4771 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.504055 4771 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310") on node "crc" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.585695 4771 reconciler_common.go:293] "Volume detached for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") on node \"crc\" DevicePath \"\"" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.875910 4771 generic.go:334] "Generic (PLEG): container finished" podID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerID="a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd" exitCode=0 Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.876263 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.878100 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerDied","Data":"a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd"} Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.878157 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"eb8a3435-994c-4d4d-aefa-2e60577378cf","Type":"ContainerDied","Data":"0cfa971fe1a49d0ba9a7b6d3acb3e0e94477a2e6702092f14514716c6c5cf98f"} Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.878199 4771 scope.go:117] "RemoveContainer" containerID="b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.895549 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr-config-2k59n" event={"ID":"8f26e5a5-124b-4867-8e98-42b6b5843541","Type":"ContainerDied","Data":"f130420f5f38dd7ff3309ed30779e51ac68ec4c51ff6ab2a8311f32b27afd9d1"} Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.895558 4771 generic.go:334] "Generic (PLEG): container finished" podID="8f26e5a5-124b-4867-8e98-42b6b5843541" containerID="f130420f5f38dd7ff3309ed30779e51ac68ec4c51ff6ab2a8311f32b27afd9d1" exitCode=0 Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.927205 4771 scope.go:117] "RemoveContainer" containerID="a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.970458 4771 scope.go:117] "RemoveContainer" containerID="880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff" Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.973494 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.990799 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:49:58 crc kubenswrapper[4771]: I0123 13:49:58.991040 4771 scope.go:117] "RemoveContainer" containerID="252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.013457 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.013980 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="prometheus" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.013999 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="prometheus" Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.014018 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="init-config-reloader" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.014025 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="init-config-reloader" Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.014039 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="thanos-sidecar" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.014046 4771 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="thanos-sidecar" Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.014057 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="config-reloader" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.014067 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="config-reloader" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.014247 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="prometheus" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.014273 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="config-reloader" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.014287 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" containerName="thanos-sidecar" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.016312 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.021867 4771 scope.go:117] "RemoveContainer" containerID="b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.025916 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.026159 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.026352 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.026519 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.026713 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.026722 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.026870 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244\": container with ID starting with b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244 not found: ID does not exist" containerID="b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.027037 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244"} err="failed to get container status \"b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244\": rpc error: code = NotFound desc = could not find container \"b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244\": container with ID starting with 
b0a380d3eaccdd68d2819cf6afa701c6e9ee9469cf8631613dda658bd76b6244 not found: ID does not exist" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.027070 4771 scope.go:117] "RemoveContainer" containerID="a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.027271 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.027717 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-qkchd" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.031821 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.032222 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd\": container with ID starting with a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd not found: ID does not exist" containerID="a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.032359 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd"} err="failed to get container status \"a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd\": rpc error: code = NotFound desc = could not find container \"a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd\": container with ID starting with a5aa1970040d72cfc27b72d15e70b2b35eda946b6fc910fc3e30c24e2613dffd not found: ID does not exist" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.032495 4771 scope.go:117] "RemoveContainer" containerID="880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff" Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.033011 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff\": container with ID starting with 880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff not found: ID does not exist" containerID="880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.033061 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff"} err="failed to get container status \"880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff\": rpc error: code = NotFound desc = could not find container \"880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff\": container with ID starting with 880e71abd99a03f50d75a261829fb456dc8e570f589aee8e91867ac3acbc92ff not found: ID does not exist" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.033101 4771 scope.go:117] "RemoveContainer" containerID="252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c" Jan 23 13:49:59 crc kubenswrapper[4771]: E0123 13:49:59.033422 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c\": container with ID 
starting with 252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c not found: ID does not exist" containerID="252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.033447 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c"} err="failed to get container status \"252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c\": rpc error: code = NotFound desc = could not find container \"252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c\": container with ID starting with 252accaa0deadf6b8247d36cfe1e92b430e42fd47e5284aa6bacb37ad46e768c not found: ID does not exist" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.041583 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.208347 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn8nx\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-kube-api-access-mn8nx\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.208821 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.208922 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209137 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209220 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-1\") pod 
\"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209340 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209441 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209555 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209651 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.209850 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.210011 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6c312ce-f6df-4617-ba37-6675897fa368-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.210055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.243044 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8a3435-994c-4d4d-aefa-2e60577378cf" path="/var/lib/kubelet/pods/eb8a3435-994c-4d4d-aefa-2e60577378cf/volumes" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.287160 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-56zpw"] Jan 23 13:49:59 crc kubenswrapper[4771]: 
I0123 13:49:59.294931 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-56zpw"] Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.312704 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.312775 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.312816 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.312860 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.312922 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.312953 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6c312ce-f6df-4617-ba37-6675897fa368-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.313110 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.313208 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn8nx\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-kube-api-access-mn8nx\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.313342 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.313715 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.313757 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.313798 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.313832 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.314801 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.316162 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.316809 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.318737 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
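
The csi_attacher.go:380 record above explains the absence of a device-staging step: the kubevirt.io.hostpath-provisioner node plugin does not advertise the STAGE_UNSTAGE_VOLUME capability, so the kubelet skips MountDevice (NodeStageVolume) for the hostpath PVC and proceeds straight to per-pod SetUp (NodePublishVolume), which is why the following records are MountVolume.SetUp successes. A minimal sketch of that capability check against the CSI spec's Go bindings; illustrative only, not kubelet source:

    package main

    import (
        "fmt"

        "github.com/container-storage-interface/spec/lib/go/csi"
    )

    // supportsStageUnstage reports whether a CSI node plugin advertised the
    // STAGE_UNSTAGE_VOLUME capability in its NodeGetCapabilities response.
    func supportsStageUnstage(resp *csi.NodeGetCapabilitiesResponse) bool {
        for _, c := range resp.GetCapabilities() {
            if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
                return true
            }
        }
        return false
    }

    func main() {
        // An empty response models a driver that advertises nothing: staging
        // is skipped, hence "Skipping MountDevice..." in the record above.
        fmt.Println(supportsStageUnstage(&csi.NodeGetCapabilitiesResponse{}))
    }
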
Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.318774 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fedd087c020fedaa53662fb68cb2c644ee54851c0d7a037bd330262bcce6f5b4/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.319949 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.320072 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.320795 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.321374 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.321929 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.322927 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.323969 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6c312ce-f6df-4617-ba37-6675897fa368-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.334714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.339701 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn8nx\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-kube-api-access-mn8nx\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.365551 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.378026 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.855840 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 13:49:59 crc kubenswrapper[4771]: W0123 13:49:59.868694 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6c312ce_f6df_4617_ba37_6675897fa368.slice/crio-1bd00ff525f7e65d435572146c06dac45466e8c832f5db5fe4c2c5b8e41c38af WatchSource:0}: Error finding container 1bd00ff525f7e65d435572146c06dac45466e8c832f5db5fe4c2c5b8e41c38af: Status 404 returned error can't find the container with id 1bd00ff525f7e65d435572146c06dac45466e8c832f5db5fe4c2c5b8e41c38af Jan 23 13:49:59 crc kubenswrapper[4771]: I0123 13:49:59.916659 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerStarted","Data":"1bd00ff525f7e65d435572146c06dac45466e8c832f5db5fe4c2c5b8e41c38af"} Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.269216 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.432860 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run\") pod \"8f26e5a5-124b-4867-8e98-42b6b5843541\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433089 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run" (OuterVolumeSpecName: "var-run") pod "8f26e5a5-124b-4867-8e98-42b6b5843541" (UID: "8f26e5a5-124b-4867-8e98-42b6b5843541"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433351 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-scripts\") pod \"8f26e5a5-124b-4867-8e98-42b6b5843541\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433505 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjjmj\" (UniqueName: \"kubernetes.io/projected/8f26e5a5-124b-4867-8e98-42b6b5843541-kube-api-access-tjjmj\") pod \"8f26e5a5-124b-4867-8e98-42b6b5843541\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433621 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-additional-scripts\") pod \"8f26e5a5-124b-4867-8e98-42b6b5843541\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433680 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run-ovn\") pod \"8f26e5a5-124b-4867-8e98-42b6b5843541\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433704 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-log-ovn\") pod \"8f26e5a5-124b-4867-8e98-42b6b5843541\" (UID: \"8f26e5a5-124b-4867-8e98-42b6b5843541\") " Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433821 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8f26e5a5-124b-4867-8e98-42b6b5843541" (UID: "8f26e5a5-124b-4867-8e98-42b6b5843541"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.433863 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8f26e5a5-124b-4867-8e98-42b6b5843541" (UID: "8f26e5a5-124b-4867-8e98-42b6b5843541"). InnerVolumeSpecName "var-log-ovn". 
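
The 13:49:59.027-.033 cluster further up ("RemoveContainer", "ContainerStatus from runtime service failed ... NotFound", "DeleteContainer returned error") reads as alarming E-level output, but it is the benign case of removing a container the runtime has already deleted; pod_container_deletor.go:80 below logs the same outcome as "Container not found in pod's containers". A sketch of NotFound-tolerant cleanup; the gRPC status handling is real, the runtime interface is illustrative:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // runtimeService stands in for the CRI runtime client (assumption).
    type runtimeService interface {
        RemoveContainer(ctx context.Context, containerID string) error
    }

    // removeIfExists treats NotFound as success: a container that is already
    // gone needs no further cleanup, mirroring the records above.
    func removeIfExists(ctx context.Context, rs runtimeService, id string) error {
        if err := rs.RemoveContainer(ctx, id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("container %s already removed\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    type fakeRS struct{}

    func (fakeRS) RemoveContainer(ctx context.Context, id string) error {
        return status.Errorf(codes.NotFound, "container with ID starting with %s not found: ID does not exist", id)
    }

    func main() {
        _ = removeIfExists(context.Background(), fakeRS{}, "a5aa1970")
    }
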
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.434220 4771 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.434243 4771 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.434262 4771 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8f26e5a5-124b-4867-8e98-42b6b5843541-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.434734 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8f26e5a5-124b-4867-8e98-42b6b5843541" (UID: "8f26e5a5-124b-4867-8e98-42b6b5843541"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.435013 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-scripts" (OuterVolumeSpecName: "scripts") pod "8f26e5a5-124b-4867-8e98-42b6b5843541" (UID: "8f26e5a5-124b-4867-8e98-42b6b5843541"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.440895 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f26e5a5-124b-4867-8e98-42b6b5843541-kube-api-access-tjjmj" (OuterVolumeSpecName: "kube-api-access-tjjmj") pod "8f26e5a5-124b-4867-8e98-42b6b5843541" (UID: "8f26e5a5-124b-4867-8e98-42b6b5843541"). InnerVolumeSpecName "kube-api-access-tjjmj". 
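
The reconciler_common.go records through this stretch are the volume manager's desired-state versus actual-state loop: pod 8f26e5a5-124b-4867-8e98-42b6b5843541 has left the desired state, so each of its still-mounted volumes goes through "UnmountVolume started", then "TearDown succeeded", then "Volume detached". A toy version of the set difference that produces the unmount list (names illustrative):

    package main

    import "fmt"

    // reconcile returns volumes that are mounted but no longer desired; the
    // real reconciler issues UnmountVolume for each and marks them detached.
    func reconcile(desired, mounted map[string]bool) []string {
        var unmount []string
        for vol := range mounted {
            if !desired[vol] {
                unmount = append(unmount, vol)
            }
        }
        return unmount
    }

    func main() {
        mounted := map[string]bool{"scripts": true, "var-run": true, "kube-api-access-tjjmj": true}
        desired := map[string]bool{} // pod deleted: nothing desired any more
        fmt.Println(reconcile(desired, mounted))
    }
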
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.537133 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.537193 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjjmj\" (UniqueName: \"kubernetes.io/projected/8f26e5a5-124b-4867-8e98-42b6b5843541-kube-api-access-tjjmj\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.537213 4771 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8f26e5a5-124b-4867-8e98-42b6b5843541-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.934053 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nxbfr-config-2k59n" event={"ID":"8f26e5a5-124b-4867-8e98-42b6b5843541","Type":"ContainerDied","Data":"e2ce009428f80490f6cd1fa5b59cc6f9f38ad3b003ffa507b94c5eec02e689ef"} Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.934119 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2ce009428f80490f6cd1fa5b59cc6f9f38ad3b003ffa507b94c5eec02e689ef" Jan 23 13:50:00 crc kubenswrapper[4771]: I0123 13:50:00.934159 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nxbfr-config-2k59n" Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.047897 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.067816 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb429d80-3c7c-4014-9a5c-d40256e70014-etc-swift\") pod \"swift-storage-0\" (UID: \"cb429d80-3c7c-4014-9a5c-d40256e70014\") " pod="openstack/swift-storage-0" Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.240511 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3411c00-1505-4570-81d3-25b9a9d308e7" path="/var/lib/kubelet/pods/c3411c00-1505-4570-81d3-25b9a9d308e7/volumes" Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.285244 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.395882 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nxbfr-config-2k59n"] Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.424723 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nxbfr-config-2k59n"] Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.916039 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 13:50:01 crc kubenswrapper[4771]: W0123 13:50:01.922514 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb429d80_3c7c_4014_9a5c_d40256e70014.slice/crio-d4864fa8d4b7fd5ff968ed57e5398994d8bbd0a25d14f25b82032b0298709923 WatchSource:0}: Error finding container d4864fa8d4b7fd5ff968ed57e5398994d8bbd0a25d14f25b82032b0298709923: Status 404 returned error can't find the container with id d4864fa8d4b7fd5ff968ed57e5398994d8bbd0a25d14f25b82032b0298709923 Jan 23 13:50:01 crc kubenswrapper[4771]: I0123 13:50:01.943680 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"d4864fa8d4b7fd5ff968ed57e5398994d8bbd0a25d14f25b82032b0298709923"} Jan 23 13:50:02 crc kubenswrapper[4771]: I0123 13:50:02.677570 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused" Jan 23 13:50:02 crc kubenswrapper[4771]: I0123 13:50:02.976691 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"8619255c22d0fa1eeeebf7c90eaa3b8d0055dd799cd89284453af6b0aaa926c0"} Jan 23 13:50:03 crc kubenswrapper[4771]: I0123 13:50:03.241768 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f26e5a5-124b-4867-8e98-42b6b5843541" path="/var/lib/kubelet/pods/8f26e5a5-124b-4867-8e98-42b6b5843541/volumes" Jan 23 13:50:03 crc kubenswrapper[4771]: I0123 13:50:03.436498 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Jan 23 13:50:03 crc kubenswrapper[4771]: I0123 13:50:03.813549 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="add41260-19c8-4989-a0a9-97a93316c6e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.112:5671: connect: connection refused" Jan 23 13:50:03 crc kubenswrapper[4771]: I0123 13:50:03.990657 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"9ca2e7888a1cca02540e9c6cd4ef37db5f1040c0badd43df2d96c6fe12d3e915"} Jan 23 13:50:03 crc kubenswrapper[4771]: I0123 13:50:03.990781 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"18d1840fc7f916ca3acf693b9bd37709f09a50ef050df46f54757fba8b9e4db0"} Jan 23 13:50:03 crc kubenswrapper[4771]: I0123 
13:50:03.990799 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"0b127c8414b6fb173d9f73966c04e38fa4cd3f9ffac72635f49003cdb62a7a62"} Jan 23 13:50:03 crc kubenswrapper[4771]: I0123 13:50:03.992879 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerStarted","Data":"28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0"} Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.313312 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-r9jcp"] Jan 23 13:50:04 crc kubenswrapper[4771]: E0123 13:50:04.317484 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f26e5a5-124b-4867-8e98-42b6b5843541" containerName="ovn-config" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.317526 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f26e5a5-124b-4867-8e98-42b6b5843541" containerName="ovn-config" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.317738 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f26e5a5-124b-4867-8e98-42b6b5843541" containerName="ovn-config" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.318686 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.321219 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.350986 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r9jcp"] Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.431300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89d485bf-c7b4-41ab-b3d1-117d98e1df46-operator-scripts\") pod \"root-account-create-update-r9jcp\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.431513 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlhrn\" (UniqueName: \"kubernetes.io/projected/89d485bf-c7b4-41ab-b3d1-117d98e1df46-kube-api-access-xlhrn\") pod \"root-account-create-update-r9jcp\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.533388 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlhrn\" (UniqueName: \"kubernetes.io/projected/89d485bf-c7b4-41ab-b3d1-117d98e1df46-kube-api-access-xlhrn\") pod \"root-account-create-update-r9jcp\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.533919 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89d485bf-c7b4-41ab-b3d1-117d98e1df46-operator-scripts\") pod \"root-account-create-update-r9jcp\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:04 crc 
kubenswrapper[4771]: I0123 13:50:04.535036 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89d485bf-c7b4-41ab-b3d1-117d98e1df46-operator-scripts\") pod \"root-account-create-update-r9jcp\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.552593 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlhrn\" (UniqueName: \"kubernetes.io/projected/89d485bf-c7b4-41ab-b3d1-117d98e1df46-kube-api-access-xlhrn\") pod \"root-account-create-update-r9jcp\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:04 crc kubenswrapper[4771]: I0123 13:50:04.677491 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:05 crc kubenswrapper[4771]: I0123 13:50:05.022559 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"6c537218d3f8622e3b84e34583c8dc353d2252062e92cfa47ff24a8983ce1c62"} Jan 23 13:50:05 crc kubenswrapper[4771]: I0123 13:50:05.023021 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"ce862fff330e2fcd7401c0b7e2c1d02ee33d7ba4547a4595c0699bc5824b5546"} Jan 23 13:50:05 crc kubenswrapper[4771]: I0123 13:50:05.023043 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"398b1206bd69625f22129aa15dbd558c89c83fb5077df645fbc4ed903b9c5f14"} Jan 23 13:50:05 crc kubenswrapper[4771]: I0123 13:50:05.215766 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-r9jcp"] Jan 23 13:50:05 crc kubenswrapper[4771]: W0123 13:50:05.231732 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89d485bf_c7b4_41ab_b3d1_117d98e1df46.slice/crio-e1516f4e9f209fcafc8d2b01e3d1d2612138795193aa5f64089245822a5e32a5 WatchSource:0}: Error finding container e1516f4e9f209fcafc8d2b01e3d1d2612138795193aa5f64089245822a5e32a5: Status 404 returned error can't find the container with id e1516f4e9f209fcafc8d2b01e3d1d2612138795193aa5f64089245822a5e32a5 Jan 23 13:50:06 crc kubenswrapper[4771]: I0123 13:50:06.035540 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"ad2ad01aec166106f8f3dd0f8825e4faa4b0944ce78e2bcc243837ba01320959"} Jan 23 13:50:06 crc kubenswrapper[4771]: I0123 13:50:06.040093 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9jcp" event={"ID":"89d485bf-c7b4-41ab-b3d1-117d98e1df46","Type":"ContainerStarted","Data":"e1516f4e9f209fcafc8d2b01e3d1d2612138795193aa5f64089245822a5e32a5"} Jan 23 13:50:09 crc kubenswrapper[4771]: I0123 13:50:09.068435 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9jcp" event={"ID":"89d485bf-c7b4-41ab-b3d1-117d98e1df46","Type":"ContainerStarted","Data":"4cd87ed3432a8babfe473cb57e1ce5be121c129f12ae8277c20d11edf3f9b680"} Jan 23 13:50:10 crc 
kubenswrapper[4771]: I0123 13:50:10.083176 4771 generic.go:334] "Generic (PLEG): container finished" podID="89d485bf-c7b4-41ab-b3d1-117d98e1df46" containerID="4cd87ed3432a8babfe473cb57e1ce5be121c129f12ae8277c20d11edf3f9b680" exitCode=0 Jan 23 13:50:10 crc kubenswrapper[4771]: I0123 13:50:10.083281 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9jcp" event={"ID":"89d485bf-c7b4-41ab-b3d1-117d98e1df46","Type":"ContainerDied","Data":"4cd87ed3432a8babfe473cb57e1ce5be121c129f12ae8277c20d11edf3f9b680"} Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.095699 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"649f05538d836c19b18952b852458d994ac23cc0bc481fe404644b9cf96f7a4d"} Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.096134 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"15e7678ea27b61f64be619a886045d89be332251cb60fcca612f9f5e4c3b6954"} Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.096146 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"f97d1ea925ac73792bc7c5e283dbf529e72111f20876cdcbe6fd6ff7ac022742"} Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.572205 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.680355 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89d485bf-c7b4-41ab-b3d1-117d98e1df46-operator-scripts\") pod \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.680701 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlhrn\" (UniqueName: \"kubernetes.io/projected/89d485bf-c7b4-41ab-b3d1-117d98e1df46-kube-api-access-xlhrn\") pod \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\" (UID: \"89d485bf-c7b4-41ab-b3d1-117d98e1df46\") " Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.682528 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d485bf-c7b4-41ab-b3d1-117d98e1df46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89d485bf-c7b4-41ab-b3d1-117d98e1df46" (UID: "89d485bf-c7b4-41ab-b3d1-117d98e1df46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.685358 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d485bf-c7b4-41ab-b3d1-117d98e1df46-kube-api-access-xlhrn" (OuterVolumeSpecName: "kube-api-access-xlhrn") pod "89d485bf-c7b4-41ab-b3d1-117d98e1df46" (UID: "89d485bf-c7b4-41ab-b3d1-117d98e1df46"). InnerVolumeSpecName "kube-api-access-xlhrn". 
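
The generic.go:334 "Generic (PLEG): container finished" records come from the pod lifecycle event generator: it relists container state from the runtime, diffs it against the previous snapshot, and feeds ContainerStarted/ContainerDied events (with the observed exitCode) into the "SyncLoop (PLEG)" records seen throughout this log. A compact, illustrative version of that diff:

    package main

    import "fmt"

    type state int

    const (
        running state = iota
        exited
    )

    // plegDiff emits lifecycle events by comparing two relist snapshots keyed
    // by container ID; a simplified take on the relist algorithm.
    func plegDiff(prev, curr map[string]state) {
        for id, s := range curr {
            old, seen := prev[id]
            switch {
            case !seen && s == running:
                fmt.Println("ContainerStarted", id)
            case seen && old == running && s == exited:
                fmt.Println("ContainerDied", id)
            }
        }
    }

    func main() {
        prev := map[string]state{"4cd87ed3": running}
        curr := map[string]state{"4cd87ed3": exited} // job container exits 0
        plegDiff(prev, curr)
    }
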
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.782894 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlhrn\" (UniqueName: \"kubernetes.io/projected/89d485bf-c7b4-41ab-b3d1-117d98e1df46-kube-api-access-xlhrn\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:11 crc kubenswrapper[4771]: I0123 13:50:11.782938 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89d485bf-c7b4-41ab-b3d1-117d98e1df46-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.112928 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"94f257146bb01fef69b3484c8f8f8817e40cabe90168f48efc2241d7f56afcc0"} Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.112984 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"6aea0e3f51e2707e847ba42b19bcc6aad162114732987da0da97d4bff42750f2"} Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.112999 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"17124bff58af30d118e0de38205eabc15d8c895cd170614b0f353b9fff636103"} Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.113012 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cb429d80-3c7c-4014-9a5c-d40256e70014","Type":"ContainerStarted","Data":"fe9248d740b78646bf5127782c0e2c440d392e47e05c98be407d9d4d04879e96"} Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.116922 4771 generic.go:334] "Generic (PLEG): container finished" podID="c6c312ce-f6df-4617-ba37-6675897fa368" containerID="28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0" exitCode=0 Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.116987 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerDied","Data":"28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0"} Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.126853 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-r9jcp" event={"ID":"89d485bf-c7b4-41ab-b3d1-117d98e1df46","Type":"ContainerDied","Data":"e1516f4e9f209fcafc8d2b01e3d1d2612138795193aa5f64089245822a5e32a5"} Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.126907 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1516f4e9f209fcafc8d2b01e3d1d2612138795193aa5f64089245822a5e32a5" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.126989 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-r9jcp" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.172667 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.869871249 podStartE2EDuration="44.172635068s" podCreationTimestamp="2026-01-23 13:49:28 +0000 UTC" firstStartedPulling="2026-01-23 13:50:01.930728393 +0000 UTC m=+1042.953266018" lastFinishedPulling="2026-01-23 13:50:10.233492212 +0000 UTC m=+1051.256029837" observedRunningTime="2026-01-23 13:50:12.164332056 +0000 UTC m=+1053.186869681" watchObservedRunningTime="2026-01-23 13:50:12.172635068 +0000 UTC m=+1053.195172693" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.544540 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57589d46b9-454st"] Jan 23 13:50:12 crc kubenswrapper[4771]: E0123 13:50:12.545235 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d485bf-c7b4-41ab-b3d1-117d98e1df46" containerName="mariadb-account-create-update" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.545256 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d485bf-c7b4-41ab-b3d1-117d98e1df46" containerName="mariadb-account-create-update" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.545461 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d485bf-c7b4-41ab-b3d1-117d98e1df46" containerName="mariadb-account-create-update" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.546496 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.548673 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.565861 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57589d46b9-454st"] Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.676550 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.708059 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-sb\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.708130 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-nb\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.708154 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq2bv\" (UniqueName: \"kubernetes.io/projected/486a5657-e74a-4037-9ec1-52b56b74bb1e-kube-api-access-dq2bv\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st"
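
The pod_startup_latency_tracker.go:104 record above decodes as follows: podStartE2EDuration 44.172635068s is watchObservedRunningTime (13:50:12.172635068) minus podCreationTimestamp (13:49:28); the image-pull window from firstStartedPulling (m=+1042.953266018) to lastFinishedPulling (m=+1051.256029837) lasted 8.302763819s; and podStartSLOduration 35.869871249 = 44.172635068 - 8.302763819, i.e. the SLO figure for swift-storage-0 is the end-to-end startup time with image pulling excluded.
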
pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.708249 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-svc\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.708505 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-swift-storage-0\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.708564 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-config\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.810599 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-svc\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.810674 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-swift-storage-0\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.810720 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-config\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.810785 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-sb\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.810819 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-nb\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.810841 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq2bv\" (UniqueName: \"kubernetes.io/projected/486a5657-e74a-4037-9ec1-52b56b74bb1e-kube-api-access-dq2bv\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " 
pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.812149 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-svc\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.812708 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-swift-storage-0\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.813210 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-config\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.813977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-nb\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.814326 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-sb\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.829721 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq2bv\" (UniqueName: \"kubernetes.io/projected/486a5657-e74a-4037-9ec1-52b56b74bb1e-kube-api-access-dq2bv\") pod \"dnsmasq-dns-57589d46b9-454st\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:12 crc kubenswrapper[4771]: I0123 13:50:12.866217 4771 util.go:30] "No sandbox for pod can be found. 
Jan 23 13:50:13 crc kubenswrapper[4771]: I0123 13:50:13.153730 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerStarted","Data":"9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5"} Jan 23 13:50:13 crc kubenswrapper[4771]: I0123 13:50:13.385121 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57589d46b9-454st"] Jan 23 13:50:13 crc kubenswrapper[4771]: I0123 13:50:13.434652 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused" Jan 23 13:50:13 crc kubenswrapper[4771]: I0123 13:50:13.814367 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.014270 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-847pr"] Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.016886 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-847pr" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.041953 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-847pr"] Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.138673 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7213-account-create-update-z86pp"] Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.139294 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpwkg\" (UniqueName: \"kubernetes.io/projected/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-kube-api-access-rpwkg\") pod \"glance-db-create-847pr\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " pod="openstack/glance-db-create-847pr" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.139384 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-operator-scripts\") pod \"glance-db-create-847pr\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " pod="openstack/glance-db-create-847pr" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.140272 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.142802 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.151444 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7213-account-create-update-z86pp"] Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.172816 4771 generic.go:334] "Generic (PLEG): container finished" podID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerID="5cf5af7b6aa72aa9cbb95e293efb5512b0dacdd5956016eb56d678702f021aea" exitCode=0 Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.172881 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57589d46b9-454st" event={"ID":"486a5657-e74a-4037-9ec1-52b56b74bb1e","Type":"ContainerDied","Data":"5cf5af7b6aa72aa9cbb95e293efb5512b0dacdd5956016eb56d678702f021aea"} Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.172924 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57589d46b9-454st" event={"ID":"486a5657-e74a-4037-9ec1-52b56b74bb1e","Type":"ContainerStarted","Data":"fbefe440071583a54a0f6b7eda00963084fb853434205bad7204f06af14b030f"} Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.241314 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjsxh\" (UniqueName: \"kubernetes.io/projected/c60896ed-8589-4227-b109-0350ff91d3d2-kube-api-access-sjsxh\") pod \"glance-7213-account-create-update-z86pp\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.241403 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpwkg\" (UniqueName: \"kubernetes.io/projected/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-kube-api-access-rpwkg\") pod \"glance-db-create-847pr\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " pod="openstack/glance-db-create-847pr" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.241462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-operator-scripts\") pod \"glance-db-create-847pr\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " pod="openstack/glance-db-create-847pr" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.241501 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c60896ed-8589-4227-b109-0350ff91d3d2-operator-scripts\") pod \"glance-7213-account-create-update-z86pp\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.243308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-operator-scripts\") pod \"glance-db-create-847pr\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " pod="openstack/glance-db-create-847pr" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.344030 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjsxh\" (UniqueName: 
\"kubernetes.io/projected/c60896ed-8589-4227-b109-0350ff91d3d2-kube-api-access-sjsxh\") pod \"glance-7213-account-create-update-z86pp\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.344148 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c60896ed-8589-4227-b109-0350ff91d3d2-operator-scripts\") pod \"glance-7213-account-create-update-z86pp\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.345065 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c60896ed-8589-4227-b109-0350ff91d3d2-operator-scripts\") pod \"glance-7213-account-create-update-z86pp\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.364491 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjsxh\" (UniqueName: \"kubernetes.io/projected/c60896ed-8589-4227-b109-0350ff91d3d2-kube-api-access-sjsxh\") pod \"glance-7213-account-create-update-z86pp\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.366373 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpwkg\" (UniqueName: \"kubernetes.io/projected/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-kube-api-access-rpwkg\") pod \"glance-db-create-847pr\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " pod="openstack/glance-db-create-847pr" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.454804 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:14 crc kubenswrapper[4771]: I0123 13:50:14.634172 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-847pr" Jan 23 13:50:15 crc kubenswrapper[4771]: I0123 13:50:15.266519 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7213-account-create-update-z86pp"] Jan 23 13:50:15 crc kubenswrapper[4771]: I0123 13:50:15.559585 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-847pr"] Jan 23 13:50:15 crc kubenswrapper[4771]: W0123 13:50:15.574090 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaacf8e04_67b0_426d_a9fb_6eddaf9d2887.slice/crio-63fd74cdcc159f697621fd53dbff5da1218710ee805c1dbf2019929be535d343 WatchSource:0}: Error finding container 63fd74cdcc159f697621fd53dbff5da1218710ee805c1dbf2019929be535d343: Status 404 returned error can't find the container with id 63fd74cdcc159f697621fd53dbff5da1218710ee805c1dbf2019929be535d343 Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.195064 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerStarted","Data":"deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d"} Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.195147 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerStarted","Data":"8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616"} Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.198864 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57589d46b9-454st" event={"ID":"486a5657-e74a-4037-9ec1-52b56b74bb1e","Type":"ContainerStarted","Data":"46cd01ddc1a2d8b0a702f6385d53048e7ba1fd7a2a4e12a507fd00576dd5c2e6"} Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.199031 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.201330 4771 generic.go:334] "Generic (PLEG): container finished" podID="c60896ed-8589-4227-b109-0350ff91d3d2" containerID="2655d8cca67d1ed15219795cd0ffb7097d686a9d9f93a959d0747d50b6d56da1" exitCode=0 Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.201389 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7213-account-create-update-z86pp" event={"ID":"c60896ed-8589-4227-b109-0350ff91d3d2","Type":"ContainerDied","Data":"2655d8cca67d1ed15219795cd0ffb7097d686a9d9f93a959d0747d50b6d56da1"} Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.201461 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7213-account-create-update-z86pp" event={"ID":"c60896ed-8589-4227-b109-0350ff91d3d2","Type":"ContainerStarted","Data":"c4ea0d64d8dff9d32ac3fce52cba574a1ff0cd8ca50a4746fbb267853f1d6e73"} Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.203955 4771 generic.go:334] "Generic (PLEG): container finished" podID="aacf8e04-67b0-426d-a9fb-6eddaf9d2887" containerID="85ae269609b66f6acf93bcc0e892e75e9ccd8c465309b6295456328f47b9fe2e" exitCode=0 Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.204013 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-847pr" event={"ID":"aacf8e04-67b0-426d-a9fb-6eddaf9d2887","Type":"ContainerDied","Data":"85ae269609b66f6acf93bcc0e892e75e9ccd8c465309b6295456328f47b9fe2e"} Jan 23 13:50:16 crc 
kubenswrapper[4771]: I0123 13:50:16.204040 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-847pr" event={"ID":"aacf8e04-67b0-426d-a9fb-6eddaf9d2887","Type":"ContainerStarted","Data":"63fd74cdcc159f697621fd53dbff5da1218710ee805c1dbf2019929be535d343"} Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.231757 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.231734327 podStartE2EDuration="18.231734327s" podCreationTimestamp="2026-01-23 13:49:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:50:16.223649821 +0000 UTC m=+1057.246187486" watchObservedRunningTime="2026-01-23 13:50:16.231734327 +0000 UTC m=+1057.254271962" Jan 23 13:50:16 crc kubenswrapper[4771]: I0123 13:50:16.289857 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57589d46b9-454st" podStartSLOduration=4.28982632 podStartE2EDuration="4.28982632s" podCreationTimestamp="2026-01-23 13:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:50:16.287472016 +0000 UTC m=+1057.310009641" watchObservedRunningTime="2026-01-23 13:50:16.28982632 +0000 UTC m=+1057.312363945" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.700038 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-847pr" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.711173 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.823800 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c60896ed-8589-4227-b109-0350ff91d3d2-operator-scripts\") pod \"c60896ed-8589-4227-b109-0350ff91d3d2\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.823930 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjsxh\" (UniqueName: \"kubernetes.io/projected/c60896ed-8589-4227-b109-0350ff91d3d2-kube-api-access-sjsxh\") pod \"c60896ed-8589-4227-b109-0350ff91d3d2\" (UID: \"c60896ed-8589-4227-b109-0350ff91d3d2\") " Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.824059 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpwkg\" (UniqueName: \"kubernetes.io/projected/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-kube-api-access-rpwkg\") pod \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.824142 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-operator-scripts\") pod \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\" (UID: \"aacf8e04-67b0-426d-a9fb-6eddaf9d2887\") " Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.824493 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c60896ed-8589-4227-b109-0350ff91d3d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"c60896ed-8589-4227-b109-0350ff91d3d2" (UID: "c60896ed-8589-4227-b109-0350ff91d3d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.824731 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aacf8e04-67b0-426d-a9fb-6eddaf9d2887" (UID: "aacf8e04-67b0-426d-a9fb-6eddaf9d2887"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.824832 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c60896ed-8589-4227-b109-0350ff91d3d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.831971 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60896ed-8589-4227-b109-0350ff91d3d2-kube-api-access-sjsxh" (OuterVolumeSpecName: "kube-api-access-sjsxh") pod "c60896ed-8589-4227-b109-0350ff91d3d2" (UID: "c60896ed-8589-4227-b109-0350ff91d3d2"). InnerVolumeSpecName "kube-api-access-sjsxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.832668 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-kube-api-access-rpwkg" (OuterVolumeSpecName: "kube-api-access-rpwkg") pod "aacf8e04-67b0-426d-a9fb-6eddaf9d2887" (UID: "aacf8e04-67b0-426d-a9fb-6eddaf9d2887"). InnerVolumeSpecName "kube-api-access-rpwkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.927153 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjsxh\" (UniqueName: \"kubernetes.io/projected/c60896ed-8589-4227-b109-0350ff91d3d2-kube-api-access-sjsxh\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.927203 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpwkg\" (UniqueName: \"kubernetes.io/projected/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-kube-api-access-rpwkg\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:17 crc kubenswrapper[4771]: I0123 13:50:17.927220 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aacf8e04-67b0-426d-a9fb-6eddaf9d2887-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:18 crc kubenswrapper[4771]: I0123 13:50:18.225135 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-847pr" Jan 23 13:50:18 crc kubenswrapper[4771]: I0123 13:50:18.225147 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-847pr" event={"ID":"aacf8e04-67b0-426d-a9fb-6eddaf9d2887","Type":"ContainerDied","Data":"63fd74cdcc159f697621fd53dbff5da1218710ee805c1dbf2019929be535d343"} Jan 23 13:50:18 crc kubenswrapper[4771]: I0123 13:50:18.225197 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63fd74cdcc159f697621fd53dbff5da1218710ee805c1dbf2019929be535d343" Jan 23 13:50:18 crc kubenswrapper[4771]: I0123 13:50:18.226984 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7213-account-create-update-z86pp" event={"ID":"c60896ed-8589-4227-b109-0350ff91d3d2","Type":"ContainerDied","Data":"c4ea0d64d8dff9d32ac3fce52cba574a1ff0cd8ca50a4746fbb267853f1d6e73"} Jan 23 13:50:18 crc kubenswrapper[4771]: I0123 13:50:18.227022 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ea0d64d8dff9d32ac3fce52cba574a1ff0cd8ca50a4746fbb267853f1d6e73" Jan 23 13:50:18 crc kubenswrapper[4771]: I0123 13:50:18.227035 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7213-account-create-update-z86pp" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.291512 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-47x2v"] Jan 23 13:50:19 crc kubenswrapper[4771]: E0123 13:50:19.292503 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60896ed-8589-4227-b109-0350ff91d3d2" containerName="mariadb-account-create-update" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.292523 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60896ed-8589-4227-b109-0350ff91d3d2" containerName="mariadb-account-create-update" Jan 23 13:50:19 crc kubenswrapper[4771]: E0123 13:50:19.292544 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacf8e04-67b0-426d-a9fb-6eddaf9d2887" containerName="mariadb-database-create" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.292553 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacf8e04-67b0-426d-a9fb-6eddaf9d2887" containerName="mariadb-database-create" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.292818 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacf8e04-67b0-426d-a9fb-6eddaf9d2887" containerName="mariadb-database-create" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.292895 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60896ed-8589-4227-b109-0350ff91d3d2" containerName="mariadb-account-create-update" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.293920 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.296464 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j5h54" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.300302 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.309014 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-47x2v"] Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.368957 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlxk6\" (UniqueName: \"kubernetes.io/projected/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-kube-api-access-dlxk6\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.369085 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-combined-ca-bundle\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.369120 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-config-data\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.369228 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-db-sync-config-data\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.378875 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.471977 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-db-sync-config-data\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.472279 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlxk6\" (UniqueName: \"kubernetes.io/projected/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-kube-api-access-dlxk6\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.472310 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-combined-ca-bundle\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.472338 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-config-data\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.479066 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-db-sync-config-data\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.479156 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-config-data\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.479700 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-combined-ca-bundle\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.491896 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlxk6\" (UniqueName: \"kubernetes.io/projected/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-kube-api-access-dlxk6\") pod \"glance-db-sync-47x2v\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:19 crc kubenswrapper[4771]: I0123 13:50:19.617239 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-47x2v" Jan 23 13:50:20 crc kubenswrapper[4771]: I0123 13:50:20.184241 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-47x2v"] Jan 23 13:50:20 crc kubenswrapper[4771]: W0123 13:50:20.186476 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aa3ff81_43f1_4fcb_8c40_95d7aa786a06.slice/crio-02d7202890419f265f813479ccae571b5c66870875f13513de6d675a3a09cae6 WatchSource:0}: Error finding container 02d7202890419f265f813479ccae571b5c66870875f13513de6d675a3a09cae6: Status 404 returned error can't find the container with id 02d7202890419f265f813479ccae571b5c66870875f13513de6d675a3a09cae6 Jan 23 13:50:20 crc kubenswrapper[4771]: I0123 13:50:20.268192 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-47x2v" event={"ID":"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06","Type":"ContainerStarted","Data":"02d7202890419f265f813479ccae571b5c66870875f13513de6d675a3a09cae6"} Jan 23 13:50:22 crc kubenswrapper[4771]: I0123 13:50:22.679728 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 13:50:22 crc kubenswrapper[4771]: I0123 13:50:22.868068 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:50:22 crc kubenswrapper[4771]: I0123 13:50:22.969587 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-858b8668f9-p866q"] Jan 23 13:50:22 crc kubenswrapper[4771]: I0123 13:50:22.969865 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-858b8668f9-p866q" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerName="dnsmasq-dns" containerID="cri-o://0950f7746297950384c56a9218038f2a7ee2b7618033713147fbb195a90e8ec3" gracePeriod=10 Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.219669 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-n7p4s"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.221541 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.317585 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kd96t"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.319000 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.347730 4771 generic.go:334] "Generic (PLEG): container finished" podID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerID="0950f7746297950384c56a9218038f2a7ee2b7618033713147fbb195a90e8ec3" exitCode=0 Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.347799 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-858b8668f9-p866q" event={"ID":"0eee851c-9eea-48a7-a2c1-3444eb1738de","Type":"ContainerDied","Data":"0950f7746297950384c56a9218038f2a7ee2b7618033713147fbb195a90e8ec3"} Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.371552 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kd96t"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.385510 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2bh6\" (UniqueName: \"kubernetes.io/projected/89a6fa30-c5bf-4a35-981a-0681769b8da5-kube-api-access-r2bh6\") pod \"barbican-db-create-n7p4s\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.385828 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a6fa30-c5bf-4a35-981a-0681769b8da5-operator-scripts\") pod \"barbican-db-create-n7p4s\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.391684 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n7p4s"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.441621 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.455726 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-858b8668f9-p866q" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.493727 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwcp5\" (UniqueName: \"kubernetes.io/projected/38b08526-186a-47d2-b829-e8e85677343d-kube-api-access-xwcp5\") pod \"cinder-db-create-kd96t\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.493793 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2bh6\" (UniqueName: \"kubernetes.io/projected/89a6fa30-c5bf-4a35-981a-0681769b8da5-kube-api-access-r2bh6\") pod \"barbican-db-create-n7p4s\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.493835 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38b08526-186a-47d2-b829-e8e85677343d-operator-scripts\") pod \"cinder-db-create-kd96t\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.493885 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a6fa30-c5bf-4a35-981a-0681769b8da5-operator-scripts\") pod \"barbican-db-create-n7p4s\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.494764 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a6fa30-c5bf-4a35-981a-0681769b8da5-operator-scripts\") pod \"barbican-db-create-n7p4s\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.555497 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-mzql6"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.574954 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.583252 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.583516 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-78rbh" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.597351 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2bh6\" (UniqueName: \"kubernetes.io/projected/89a6fa30-c5bf-4a35-981a-0681769b8da5-kube-api-access-r2bh6\") pod \"barbican-db-create-n7p4s\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.599005 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwcp5\" (UniqueName: \"kubernetes.io/projected/38b08526-186a-47d2-b829-e8e85677343d-kube-api-access-xwcp5\") pod \"cinder-db-create-kd96t\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.599101 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38b08526-186a-47d2-b829-e8e85677343d-operator-scripts\") pod \"cinder-db-create-kd96t\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.599184 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f90b-account-create-update-fmb96"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.601457 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.602926 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38b08526-186a-47d2-b829-e8e85677343d-operator-scripts\") pod \"cinder-db-create-kd96t\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.619916 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.639806 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwcp5\" (UniqueName: \"kubernetes.io/projected/38b08526-186a-47d2-b829-e8e85677343d-kube-api-access-xwcp5\") pod \"cinder-db-create-kd96t\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.644117 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f90b-account-create-update-fmb96"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.681096 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.685741 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-mzql6"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.706091 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bf99b9-22eb-424f-b882-3f65e37f71fa-operator-scripts\") pod \"barbican-f90b-account-create-update-fmb96\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.706173 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-combined-ca-bundle\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.706203 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjlbz\" (UniqueName: \"kubernetes.io/projected/6a326387-5c33-41e0-b73a-8670ae5b0c48-kube-api-access-mjlbz\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.706231 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh5ln\" (UniqueName: \"kubernetes.io/projected/19bf99b9-22eb-424f-b882-3f65e37f71fa-kube-api-access-zh5ln\") pod \"barbican-f90b-account-create-update-fmb96\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.706258 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-config-data\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " 
pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.706298 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-db-sync-config-data\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.750699 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-67de-account-create-update-v9cgb"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.752199 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.758893 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.807842 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bf99b9-22eb-424f-b882-3f65e37f71fa-operator-scripts\") pod \"barbican-f90b-account-create-update-fmb96\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.807959 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-combined-ca-bundle\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.807995 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjlbz\" (UniqueName: \"kubernetes.io/projected/6a326387-5c33-41e0-b73a-8670ae5b0c48-kube-api-access-mjlbz\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.808030 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh5ln\" (UniqueName: \"kubernetes.io/projected/19bf99b9-22eb-424f-b882-3f65e37f71fa-kube-api-access-zh5ln\") pod \"barbican-f90b-account-create-update-fmb96\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.808072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-config-data\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.808127 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-db-sync-config-data\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.809733 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/19bf99b9-22eb-424f-b882-3f65e37f71fa-operator-scripts\") pod \"barbican-f90b-account-create-update-fmb96\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.823554 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-combined-ca-bundle\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.840137 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-config-data\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.840884 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-67de-account-create-update-v9cgb"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.856589 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-db-sync-config-data\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.861078 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.862464 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh5ln\" (UniqueName: \"kubernetes.io/projected/19bf99b9-22eb-424f-b882-3f65e37f71fa-kube-api-access-zh5ln\") pod \"barbican-f90b-account-create-update-fmb96\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.863714 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.874993 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjlbz\" (UniqueName: \"kubernetes.io/projected/6a326387-5c33-41e0-b73a-8670ae5b0c48-kube-api-access-mjlbz\") pod \"watcher-db-sync-mzql6\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.908189 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wt562"] Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.910004 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.910057 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdqq\" (UniqueName: \"kubernetes.io/projected/0916c80f-b8f7-4594-9409-fa66861ec3be-kube-api-access-jpdqq\") pod \"cinder-67de-account-create-update-v9cgb\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.910206 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0916c80f-b8f7-4594-9409-fa66861ec3be-operator-scripts\") pod \"cinder-67de-account-create-update-v9cgb\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.920350 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.920715 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.920856 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d69g7" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.920982 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 13:50:23 crc kubenswrapper[4771]: I0123 13:50:23.971338 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wt562"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.011353 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-g8xc5"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.013549 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.021790 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-config-data\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.021848 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-combined-ca-bundle\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.021881 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swq5s\" (UniqueName: \"kubernetes.io/projected/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-kube-api-access-swq5s\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.021961 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0916c80f-b8f7-4594-9409-fa66861ec3be-operator-scripts\") pod \"cinder-67de-account-create-update-v9cgb\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.022025 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpdqq\" (UniqueName: \"kubernetes.io/projected/0916c80f-b8f7-4594-9409-fa66861ec3be-kube-api-access-jpdqq\") pod \"cinder-67de-account-create-update-v9cgb\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.023244 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0916c80f-b8f7-4594-9409-fa66861ec3be-operator-scripts\") pod \"cinder-67de-account-create-update-v9cgb\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.038998 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-mzql6" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.049869 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g8xc5"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.077315 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpdqq\" (UniqueName: \"kubernetes.io/projected/0916c80f-b8f7-4594-9409-fa66861ec3be-kube-api-access-jpdqq\") pod \"cinder-67de-account-create-update-v9cgb\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.104738 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d677-account-create-update-blbpn"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.114944 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.123215 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.123708 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d677-account-create-update-blbpn"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.125899 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjm7r\" (UniqueName: \"kubernetes.io/projected/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-kube-api-access-tjm7r\") pod \"neutron-db-create-g8xc5\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.126210 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-operator-scripts\") pod \"neutron-db-create-g8xc5\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.126699 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-config-data\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.126734 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-combined-ca-bundle\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.126894 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swq5s\" (UniqueName: \"kubernetes.io/projected/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-kube-api-access-swq5s\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.134558 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-combined-ca-bundle\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.139522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-config-data\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.171212 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swq5s\" (UniqueName: \"kubernetes.io/projected/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-kube-api-access-swq5s\") pod \"keystone-db-sync-wt562\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.194254 
4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.242519 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjm7r\" (UniqueName: \"kubernetes.io/projected/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-kube-api-access-tjm7r\") pod \"neutron-db-create-g8xc5\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.242663 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-operator-scripts\") pod \"neutron-db-create-g8xc5\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.242803 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkv7\" (UniqueName: \"kubernetes.io/projected/a06c31d6-aa39-4c7a-999a-f94f49e13a43-kube-api-access-rwkv7\") pod \"neutron-d677-account-create-update-blbpn\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.242833 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a06c31d6-aa39-4c7a-999a-f94f49e13a43-operator-scripts\") pod \"neutron-d677-account-create-update-blbpn\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.244349 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-operator-scripts\") pod \"neutron-db-create-g8xc5\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.266232 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjm7r\" (UniqueName: \"kubernetes.io/projected/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-kube-api-access-tjm7r\") pod \"neutron-db-create-g8xc5\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.288950 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.361129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwkv7\" (UniqueName: \"kubernetes.io/projected/a06c31d6-aa39-4c7a-999a-f94f49e13a43-kube-api-access-rwkv7\") pod \"neutron-d677-account-create-update-blbpn\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.361186 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a06c31d6-aa39-4c7a-999a-f94f49e13a43-operator-scripts\") pod \"neutron-d677-account-create-update-blbpn\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.361600 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.377382 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a06c31d6-aa39-4c7a-999a-f94f49e13a43-operator-scripts\") pod \"neutron-d677-account-create-update-blbpn\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.386665 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwkv7\" (UniqueName: \"kubernetes.io/projected/a06c31d6-aa39-4c7a-999a-f94f49e13a43-kube-api-access-rwkv7\") pod \"neutron-d677-account-create-update-blbpn\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.388830 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-858b8668f9-p866q" event={"ID":"0eee851c-9eea-48a7-a2c1-3444eb1738de","Type":"ContainerDied","Data":"86cb50c7c70139ced0754ebb61c9f96cd8b7b7853085d7c7a0347141c0d8a32f"} Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.388892 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86cb50c7c70139ced0754ebb61c9f96cd8b7b7853085d7c7a0347141c0d8a32f" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.402223 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.435309 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.467500 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kd96t"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.568557 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-nb\") pod \"0eee851c-9eea-48a7-a2c1-3444eb1738de\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.568619 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-sb\") pod \"0eee851c-9eea-48a7-a2c1-3444eb1738de\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.568734 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnxhv\" (UniqueName: \"kubernetes.io/projected/0eee851c-9eea-48a7-a2c1-3444eb1738de-kube-api-access-wnxhv\") pod \"0eee851c-9eea-48a7-a2c1-3444eb1738de\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.568757 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-config\") pod \"0eee851c-9eea-48a7-a2c1-3444eb1738de\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.568804 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-dns-svc\") pod \"0eee851c-9eea-48a7-a2c1-3444eb1738de\" (UID: \"0eee851c-9eea-48a7-a2c1-3444eb1738de\") " Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.569323 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n7p4s"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.588745 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eee851c-9eea-48a7-a2c1-3444eb1738de-kube-api-access-wnxhv" (OuterVolumeSpecName: "kube-api-access-wnxhv") pod "0eee851c-9eea-48a7-a2c1-3444eb1738de" (UID: "0eee851c-9eea-48a7-a2c1-3444eb1738de"). InnerVolumeSpecName "kube-api-access-wnxhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.666585 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0eee851c-9eea-48a7-a2c1-3444eb1738de" (UID: "0eee851c-9eea-48a7-a2c1-3444eb1738de"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.670907 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f90b-account-create-update-fmb96"] Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.671258 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.671333 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnxhv\" (UniqueName: \"kubernetes.io/projected/0eee851c-9eea-48a7-a2c1-3444eb1738de-kube-api-access-wnxhv\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:24 crc kubenswrapper[4771]: W0123 13:50:24.674227 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19bf99b9_22eb_424f_b882_3f65e37f71fa.slice/crio-bf25116b3825ba87988ab422809f584f93d1468e28da55bb3fbab6ef877e7e8f WatchSource:0}: Error finding container bf25116b3825ba87988ab422809f584f93d1468e28da55bb3fbab6ef877e7e8f: Status 404 returned error can't find the container with id bf25116b3825ba87988ab422809f584f93d1468e28da55bb3fbab6ef877e7e8f Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.677474 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-config" (OuterVolumeSpecName: "config") pod "0eee851c-9eea-48a7-a2c1-3444eb1738de" (UID: "0eee851c-9eea-48a7-a2c1-3444eb1738de"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.705774 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0eee851c-9eea-48a7-a2c1-3444eb1738de" (UID: "0eee851c-9eea-48a7-a2c1-3444eb1738de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.714961 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0eee851c-9eea-48a7-a2c1-3444eb1738de" (UID: "0eee851c-9eea-48a7-a2c1-3444eb1738de"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.773919 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.774464 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.774478 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eee851c-9eea-48a7-a2c1-3444eb1738de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:24 crc kubenswrapper[4771]: I0123 13:50:24.965337 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-mzql6"] Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.072131 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d677-account-create-update-blbpn"] Jan 23 13:50:25 crc kubenswrapper[4771]: W0123 13:50:25.098467 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda06c31d6_aa39_4c7a_999a_f94f49e13a43.slice/crio-d6f86db95335416f937e2e9cf29199822b7e6527d0a1244bf1ad0f6b3d5ee05f WatchSource:0}: Error finding container d6f86db95335416f937e2e9cf29199822b7e6527d0a1244bf1ad0f6b3d5ee05f: Status 404 returned error can't find the container with id d6f86db95335416f937e2e9cf29199822b7e6527d0a1244bf1ad0f6b3d5ee05f Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.122957 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wt562"] Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.223671 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-67de-account-create-update-v9cgb"] Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.264158 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g8xc5"] Jan 23 13:50:25 crc kubenswrapper[4771]: W0123 13:50:25.287640 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdf937b2_cc0d_46a0_a3a7_f7cf3c25653b.slice/crio-43dc4307d13de3de8b368e488ad3b13c8dedc5da137dfa315a42900e8111bfb6 WatchSource:0}: Error finding container 43dc4307d13de3de8b368e488ad3b13c8dedc5da137dfa315a42900e8111bfb6: Status 404 returned error can't find the container with id 43dc4307d13de3de8b368e488ad3b13c8dedc5da137dfa315a42900e8111bfb6 Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.409562 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-67de-account-create-update-v9cgb" event={"ID":"0916c80f-b8f7-4594-9409-fa66861ec3be","Type":"ContainerStarted","Data":"0ac30ed0e5003c7158e8a17239c7576db71e0e99204b350d548102be6f25bfee"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.412694 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d677-account-create-update-blbpn" event={"ID":"a06c31d6-aa39-4c7a-999a-f94f49e13a43","Type":"ContainerStarted","Data":"d6f86db95335416f937e2e9cf29199822b7e6527d0a1244bf1ad0f6b3d5ee05f"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.418599 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-mzql6" 
event={"ID":"6a326387-5c33-41e0-b73a-8670ae5b0c48","Type":"ContainerStarted","Data":"76b5426cd44c24f5521a7ce86eed98a2ed379137203e98a8849db41760a50c68"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.424382 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7p4s" event={"ID":"89a6fa30-c5bf-4a35-981a-0681769b8da5","Type":"ContainerStarted","Data":"16fdffd4363663891bedc2baa8b1d277ddc917de0647d6707afba8db55eb8f13"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.424429 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7p4s" event={"ID":"89a6fa30-c5bf-4a35-981a-0681769b8da5","Type":"ContainerStarted","Data":"89334bd7e32c4806d8aebc24e17da9474dea4c66a3da6bf7aecc40f2b03c2dcb"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.432682 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g8xc5" event={"ID":"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b","Type":"ContainerStarted","Data":"43dc4307d13de3de8b368e488ad3b13c8dedc5da137dfa315a42900e8111bfb6"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.439267 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kd96t" event={"ID":"38b08526-186a-47d2-b829-e8e85677343d","Type":"ContainerStarted","Data":"400548e21e6cb7a964a4f566a2e24cd4d7acf2c4e96c6d4b2f7613a34f9b38d2"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.439340 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kd96t" event={"ID":"38b08526-186a-47d2-b829-e8e85677343d","Type":"ContainerStarted","Data":"0277ba191f7105a6d90d99ffa68f8e53689dbcecc27f4a1f8b8b8263c6c99911"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.448154 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-n7p4s" podStartSLOduration=2.448130946 podStartE2EDuration="2.448130946s" podCreationTimestamp="2026-01-23 13:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:50:25.445531503 +0000 UTC m=+1066.468069148" watchObservedRunningTime="2026-01-23 13:50:25.448130946 +0000 UTC m=+1066.470668571" Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.476105 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wt562" event={"ID":"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1","Type":"ContainerStarted","Data":"11f4cdf8cf6062271aaf4de81cbacfbab96b25777d029c9506133f402bb1e368"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.493317 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-kd96t" podStartSLOduration=2.49327734 podStartE2EDuration="2.49327734s" podCreationTimestamp="2026-01-23 13:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:50:25.465859515 +0000 UTC m=+1066.488397140" watchObservedRunningTime="2026-01-23 13:50:25.49327734 +0000 UTC m=+1066.515814955" Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.501561 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-858b8668f9-p866q" Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.504359 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f90b-account-create-update-fmb96" event={"ID":"19bf99b9-22eb-424f-b882-3f65e37f71fa","Type":"ContainerStarted","Data":"701a8c0b0333eafb395408c1ec27f8e2a8f13327c0d590a01625bef4a2127ad6"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.504440 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f90b-account-create-update-fmb96" event={"ID":"19bf99b9-22eb-424f-b882-3f65e37f71fa","Type":"ContainerStarted","Data":"bf25116b3825ba87988ab422809f584f93d1468e28da55bb3fbab6ef877e7e8f"} Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.560194 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-f90b-account-create-update-fmb96" podStartSLOduration=2.56015828 podStartE2EDuration="2.56015828s" podCreationTimestamp="2026-01-23 13:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:50:25.550495946 +0000 UTC m=+1066.573033581" watchObservedRunningTime="2026-01-23 13:50:25.56015828 +0000 UTC m=+1066.582695905" Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.632488 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-858b8668f9-p866q"] Jan 23 13:50:25 crc kubenswrapper[4771]: I0123 13:50:25.640107 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-858b8668f9-p866q"] Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.527778 4771 generic.go:334] "Generic (PLEG): container finished" podID="19bf99b9-22eb-424f-b882-3f65e37f71fa" containerID="701a8c0b0333eafb395408c1ec27f8e2a8f13327c0d590a01625bef4a2127ad6" exitCode=0 Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.528510 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f90b-account-create-update-fmb96" event={"ID":"19bf99b9-22eb-424f-b882-3f65e37f71fa","Type":"ContainerDied","Data":"701a8c0b0333eafb395408c1ec27f8e2a8f13327c0d590a01625bef4a2127ad6"} Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.535611 4771 generic.go:334] "Generic (PLEG): container finished" podID="0916c80f-b8f7-4594-9409-fa66861ec3be" containerID="a7277b542fcde208d3b5999a839053f639daf47d31622c39fac505422cb05abc" exitCode=0 Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.535692 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-67de-account-create-update-v9cgb" event={"ID":"0916c80f-b8f7-4594-9409-fa66861ec3be","Type":"ContainerDied","Data":"a7277b542fcde208d3b5999a839053f639daf47d31622c39fac505422cb05abc"} Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.541110 4771 generic.go:334] "Generic (PLEG): container finished" podID="a06c31d6-aa39-4c7a-999a-f94f49e13a43" containerID="38a2ec8809c518853f049533d324593df7c998b6605d5edd12e9b8b730dbf454" exitCode=0 Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.541181 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d677-account-create-update-blbpn" event={"ID":"a06c31d6-aa39-4c7a-999a-f94f49e13a43","Type":"ContainerDied","Data":"38a2ec8809c518853f049533d324593df7c998b6605d5edd12e9b8b730dbf454"} Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.544842 4771 generic.go:334] "Generic (PLEG): container finished" podID="89a6fa30-c5bf-4a35-981a-0681769b8da5" 
containerID="16fdffd4363663891bedc2baa8b1d277ddc917de0647d6707afba8db55eb8f13" exitCode=0 Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.544982 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7p4s" event={"ID":"89a6fa30-c5bf-4a35-981a-0681769b8da5","Type":"ContainerDied","Data":"16fdffd4363663891bedc2baa8b1d277ddc917de0647d6707afba8db55eb8f13"} Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.552662 4771 generic.go:334] "Generic (PLEG): container finished" podID="bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b" containerID="bba9ce7364285e275aa3b9c9e231181fe047fbcc44dc04d10f026d371c03a11c" exitCode=0 Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.552766 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g8xc5" event={"ID":"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b","Type":"ContainerDied","Data":"bba9ce7364285e275aa3b9c9e231181fe047fbcc44dc04d10f026d371c03a11c"} Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.555362 4771 generic.go:334] "Generic (PLEG): container finished" podID="38b08526-186a-47d2-b829-e8e85677343d" containerID="400548e21e6cb7a964a4f566a2e24cd4d7acf2c4e96c6d4b2f7613a34f9b38d2" exitCode=0 Jan 23 13:50:26 crc kubenswrapper[4771]: I0123 13:50:26.555456 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kd96t" event={"ID":"38b08526-186a-47d2-b829-e8e85677343d","Type":"ContainerDied","Data":"400548e21e6cb7a964a4f566a2e24cd4d7acf2c4e96c6d4b2f7613a34f9b38d2"} Jan 23 13:50:27 crc kubenswrapper[4771]: I0123 13:50:27.241925 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" path="/var/lib/kubelet/pods/0eee851c-9eea-48a7-a2c1-3444eb1738de/volumes" Jan 23 13:50:29 crc kubenswrapper[4771]: I0123 13:50:29.379718 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 13:50:29 crc kubenswrapper[4771]: I0123 13:50:29.386116 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 13:50:29 crc kubenswrapper[4771]: I0123 13:50:29.594382 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.312050 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.312165 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.598135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n7p4s" event={"ID":"89a6fa30-c5bf-4a35-981a-0681769b8da5","Type":"ContainerDied","Data":"89334bd7e32c4806d8aebc24e17da9474dea4c66a3da6bf7aecc40f2b03c2dcb"} Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.598187 4771 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="89334bd7e32c4806d8aebc24e17da9474dea4c66a3da6bf7aecc40f2b03c2dcb" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.600292 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g8xc5" event={"ID":"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b","Type":"ContainerDied","Data":"43dc4307d13de3de8b368e488ad3b13c8dedc5da137dfa315a42900e8111bfb6"} Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.600352 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43dc4307d13de3de8b368e488ad3b13c8dedc5da137dfa315a42900e8111bfb6" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.603333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f90b-account-create-update-fmb96" event={"ID":"19bf99b9-22eb-424f-b882-3f65e37f71fa","Type":"ContainerDied","Data":"bf25116b3825ba87988ab422809f584f93d1468e28da55bb3fbab6ef877e7e8f"} Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.603374 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf25116b3825ba87988ab422809f584f93d1468e28da55bb3fbab6ef877e7e8f" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.605705 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-67de-account-create-update-v9cgb" event={"ID":"0916c80f-b8f7-4594-9409-fa66861ec3be","Type":"ContainerDied","Data":"0ac30ed0e5003c7158e8a17239c7576db71e0e99204b350d548102be6f25bfee"} Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.605741 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac30ed0e5003c7158e8a17239c7576db71e0e99204b350d548102be6f25bfee" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.667297 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.688951 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.690592 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.732288 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783558 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh5ln\" (UniqueName: \"kubernetes.io/projected/19bf99b9-22eb-424f-b882-3f65e37f71fa-kube-api-access-zh5ln\") pod \"19bf99b9-22eb-424f-b882-3f65e37f71fa\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783653 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpdqq\" (UniqueName: \"kubernetes.io/projected/0916c80f-b8f7-4594-9409-fa66861ec3be-kube-api-access-jpdqq\") pod \"0916c80f-b8f7-4594-9409-fa66861ec3be\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783696 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bf99b9-22eb-424f-b882-3f65e37f71fa-operator-scripts\") pod \"19bf99b9-22eb-424f-b882-3f65e37f71fa\" (UID: \"19bf99b9-22eb-424f-b882-3f65e37f71fa\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783738 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0916c80f-b8f7-4594-9409-fa66861ec3be-operator-scripts\") pod \"0916c80f-b8f7-4594-9409-fa66861ec3be\" (UID: \"0916c80f-b8f7-4594-9409-fa66861ec3be\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783789 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2bh6\" (UniqueName: \"kubernetes.io/projected/89a6fa30-c5bf-4a35-981a-0681769b8da5-kube-api-access-r2bh6\") pod \"89a6fa30-c5bf-4a35-981a-0681769b8da5\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783889 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjm7r\" (UniqueName: \"kubernetes.io/projected/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-kube-api-access-tjm7r\") pod \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783925 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a6fa30-c5bf-4a35-981a-0681769b8da5-operator-scripts\") pod \"89a6fa30-c5bf-4a35-981a-0681769b8da5\" (UID: \"89a6fa30-c5bf-4a35-981a-0681769b8da5\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.783976 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-operator-scripts\") pod \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\" (UID: \"bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b\") " Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.785010 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b" (UID: "bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.785301 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0916c80f-b8f7-4594-9409-fa66861ec3be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0916c80f-b8f7-4594-9409-fa66861ec3be" (UID: "0916c80f-b8f7-4594-9409-fa66861ec3be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.785350 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a6fa30-c5bf-4a35-981a-0681769b8da5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89a6fa30-c5bf-4a35-981a-0681769b8da5" (UID: "89a6fa30-c5bf-4a35-981a-0681769b8da5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.785747 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19bf99b9-22eb-424f-b882-3f65e37f71fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19bf99b9-22eb-424f-b882-3f65e37f71fa" (UID: "19bf99b9-22eb-424f-b882-3f65e37f71fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.791229 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19bf99b9-22eb-424f-b882-3f65e37f71fa-kube-api-access-zh5ln" (OuterVolumeSpecName: "kube-api-access-zh5ln") pod "19bf99b9-22eb-424f-b882-3f65e37f71fa" (UID: "19bf99b9-22eb-424f-b882-3f65e37f71fa"). InnerVolumeSpecName "kube-api-access-zh5ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.791739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a6fa30-c5bf-4a35-981a-0681769b8da5-kube-api-access-r2bh6" (OuterVolumeSpecName: "kube-api-access-r2bh6") pod "89a6fa30-c5bf-4a35-981a-0681769b8da5" (UID: "89a6fa30-c5bf-4a35-981a-0681769b8da5"). InnerVolumeSpecName "kube-api-access-r2bh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.791859 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-kube-api-access-tjm7r" (OuterVolumeSpecName: "kube-api-access-tjm7r") pod "bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b" (UID: "bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b"). InnerVolumeSpecName "kube-api-access-tjm7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.802533 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0916c80f-b8f7-4594-9409-fa66861ec3be-kube-api-access-jpdqq" (OuterVolumeSpecName: "kube-api-access-jpdqq") pod "0916c80f-b8f7-4594-9409-fa66861ec3be" (UID: "0916c80f-b8f7-4594-9409-fa66861ec3be"). InnerVolumeSpecName "kube-api-access-jpdqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887062 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjm7r\" (UniqueName: \"kubernetes.io/projected/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-kube-api-access-tjm7r\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887122 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89a6fa30-c5bf-4a35-981a-0681769b8da5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887138 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887155 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh5ln\" (UniqueName: \"kubernetes.io/projected/19bf99b9-22eb-424f-b882-3f65e37f71fa-kube-api-access-zh5ln\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887170 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpdqq\" (UniqueName: \"kubernetes.io/projected/0916c80f-b8f7-4594-9409-fa66861ec3be-kube-api-access-jpdqq\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887186 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19bf99b9-22eb-424f-b882-3f65e37f71fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887199 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0916c80f-b8f7-4594-9409-fa66861ec3be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:30 crc kubenswrapper[4771]: I0123 13:50:30.887214 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2bh6\" (UniqueName: \"kubernetes.io/projected/89a6fa30-c5bf-4a35-981a-0681769b8da5-kube-api-access-r2bh6\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:31 crc kubenswrapper[4771]: I0123 13:50:31.614815 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f90b-account-create-update-fmb96" Jan 23 13:50:31 crc kubenswrapper[4771]: I0123 13:50:31.614881 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n7p4s" Jan 23 13:50:31 crc kubenswrapper[4771]: I0123 13:50:31.614945 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g8xc5" Jan 23 13:50:31 crc kubenswrapper[4771]: I0123 13:50:31.614956 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-67de-account-create-update-v9cgb" Jan 23 13:50:40 crc kubenswrapper[4771]: E0123 13:50:40.551953 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 23 13:50:40 crc kubenswrapper[4771]: E0123 13:50:40.552480 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 23 13:50:40 crc kubenswrapper[4771]: E0123 13:50:40.553062 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.129.56.240:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlxk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-47x2v_openstack(5aa3ff81-43f1-4fcb-8c40-95d7aa786a06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:50:40 crc kubenswrapper[4771]: E0123 13:50:40.554327 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-47x2v" podUID="5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" Jan 23 13:50:40 crc kubenswrapper[4771]: E0123 13:50:40.733364 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.129.56.240:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-47x2v" podUID="5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" Jan 23 13:50:44 crc kubenswrapper[4771]: I0123 13:50:44.781702 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kd96t" event={"ID":"38b08526-186a-47d2-b829-e8e85677343d","Type":"ContainerDied","Data":"0277ba191f7105a6d90d99ffa68f8e53689dbcecc27f4a1f8b8b8263c6c99911"} Jan 23 13:50:44 crc kubenswrapper[4771]: I0123 13:50:44.782510 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0277ba191f7105a6d90d99ffa68f8e53689dbcecc27f4a1f8b8b8263c6c99911" Jan 23 13:50:44 crc kubenswrapper[4771]: I0123 13:50:44.787235 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d677-account-create-update-blbpn" event={"ID":"a06c31d6-aa39-4c7a-999a-f94f49e13a43","Type":"ContainerDied","Data":"d6f86db95335416f937e2e9cf29199822b7e6527d0a1244bf1ad0f6b3d5ee05f"} Jan 23 13:50:44 crc kubenswrapper[4771]: I0123 13:50:44.787273 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f86db95335416f937e2e9cf29199822b7e6527d0a1244bf1ad0f6b3d5ee05f" Jan 23 13:50:44 crc kubenswrapper[4771]: I0123 13:50:44.843453 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:44 crc kubenswrapper[4771]: I0123 13:50:44.851817 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.003107 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwkv7\" (UniqueName: \"kubernetes.io/projected/a06c31d6-aa39-4c7a-999a-f94f49e13a43-kube-api-access-rwkv7\") pod \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.003211 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38b08526-186a-47d2-b829-e8e85677343d-operator-scripts\") pod \"38b08526-186a-47d2-b829-e8e85677343d\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.003280 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a06c31d6-aa39-4c7a-999a-f94f49e13a43-operator-scripts\") pod \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\" (UID: \"a06c31d6-aa39-4c7a-999a-f94f49e13a43\") " Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.003363 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwcp5\" (UniqueName: \"kubernetes.io/projected/38b08526-186a-47d2-b829-e8e85677343d-kube-api-access-xwcp5\") pod \"38b08526-186a-47d2-b829-e8e85677343d\" (UID: \"38b08526-186a-47d2-b829-e8e85677343d\") " Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.003869 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b08526-186a-47d2-b829-e8e85677343d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38b08526-186a-47d2-b829-e8e85677343d" (UID: "38b08526-186a-47d2-b829-e8e85677343d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.004448 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a06c31d6-aa39-4c7a-999a-f94f49e13a43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a06c31d6-aa39-4c7a-999a-f94f49e13a43" (UID: "a06c31d6-aa39-4c7a-999a-f94f49e13a43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.004467 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38b08526-186a-47d2-b829-e8e85677343d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.010362 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b08526-186a-47d2-b829-e8e85677343d-kube-api-access-xwcp5" (OuterVolumeSpecName: "kube-api-access-xwcp5") pod "38b08526-186a-47d2-b829-e8e85677343d" (UID: "38b08526-186a-47d2-b829-e8e85677343d"). InnerVolumeSpecName "kube-api-access-xwcp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.014852 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06c31d6-aa39-4c7a-999a-f94f49e13a43-kube-api-access-rwkv7" (OuterVolumeSpecName: "kube-api-access-rwkv7") pod "a06c31d6-aa39-4c7a-999a-f94f49e13a43" (UID: "a06c31d6-aa39-4c7a-999a-f94f49e13a43"). InnerVolumeSpecName "kube-api-access-rwkv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.106838 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a06c31d6-aa39-4c7a-999a-f94f49e13a43-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.106890 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwcp5\" (UniqueName: \"kubernetes.io/projected/38b08526-186a-47d2-b829-e8e85677343d-kube-api-access-xwcp5\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.106909 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwkv7\" (UniqueName: \"kubernetes.io/projected/a06c31d6-aa39-4c7a-999a-f94f49e13a43-kube-api-access-rwkv7\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:45 crc kubenswrapper[4771]: E0123 13:50:45.254890 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 23 13:50:45 crc kubenswrapper[4771]: E0123 13:50:45.254963 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 23 13:50:45 crc kubenswrapper[4771]: E0123 13:50:45.255156 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.129.56.240:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mjlbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-mzql6_openstack(6a326387-5c33-41e0-b73a-8670ae5b0c48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:50:45 crc kubenswrapper[4771]: E0123 13:50:45.256362 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-mzql6" podUID="6a326387-5c33-41e0-b73a-8670ae5b0c48" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.799701 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wt562" event={"ID":"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1","Type":"ContainerStarted","Data":"0be3692ff5636522a613b344a75d8813336b721bc19eaaee49433889018a304e"} Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.799804 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d677-account-create-update-blbpn" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.799917 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kd96t" Jan 23 13:50:45 crc kubenswrapper[4771]: E0123 13:50:45.801815 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-mzql6" podUID="6a326387-5c33-41e0-b73a-8670ae5b0c48" Jan 23 13:50:45 crc kubenswrapper[4771]: I0123 13:50:45.830389 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wt562" podStartSLOduration=2.744436274 podStartE2EDuration="22.83036763s" podCreationTimestamp="2026-01-23 13:50:23 +0000 UTC" firstStartedPulling="2026-01-23 13:50:25.140364244 +0000 UTC m=+1066.162901859" lastFinishedPulling="2026-01-23 13:50:45.22629557 +0000 UTC m=+1086.248833215" observedRunningTime="2026-01-23 13:50:45.821847662 +0000 UTC m=+1086.844385287" watchObservedRunningTime="2026-01-23 13:50:45.83036763 +0000 UTC m=+1086.852905255" Jan 23 13:50:52 crc kubenswrapper[4771]: I0123 13:50:52.876867 4771 generic.go:334] "Generic (PLEG): container finished" podID="b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" containerID="0be3692ff5636522a613b344a75d8813336b721bc19eaaee49433889018a304e" exitCode=0 Jan 23 13:50:52 crc kubenswrapper[4771]: I0123 13:50:52.876977 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wt562" event={"ID":"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1","Type":"ContainerDied","Data":"0be3692ff5636522a613b344a75d8813336b721bc19eaaee49433889018a304e"} Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.285848 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.403535 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swq5s\" (UniqueName: \"kubernetes.io/projected/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-kube-api-access-swq5s\") pod \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.403848 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-combined-ca-bundle\") pod \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.403949 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-config-data\") pod \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\" (UID: \"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1\") " Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.411560 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-kube-api-access-swq5s" (OuterVolumeSpecName: "kube-api-access-swq5s") pod "b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" (UID: "b16dd137-a1ef-4b9f-b6b9-8b70b3908db1"). InnerVolumeSpecName "kube-api-access-swq5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.431726 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" (UID: "b16dd137-a1ef-4b9f-b6b9-8b70b3908db1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.452898 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-config-data" (OuterVolumeSpecName: "config-data") pod "b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" (UID: "b16dd137-a1ef-4b9f-b6b9-8b70b3908db1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.506813 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swq5s\" (UniqueName: \"kubernetes.io/projected/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-kube-api-access-swq5s\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.506850 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.506863 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.900086 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wt562" event={"ID":"b16dd137-a1ef-4b9f-b6b9-8b70b3908db1","Type":"ContainerDied","Data":"11f4cdf8cf6062271aaf4de81cbacfbab96b25777d029c9506133f402bb1e368"} Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.900148 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11f4cdf8cf6062271aaf4de81cbacfbab96b25777d029c9506133f402bb1e368" Jan 23 13:50:54 crc kubenswrapper[4771]: I0123 13:50:54.900605 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wt562" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.193892 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f7756d4b7-krzcw"] Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194482 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerName="init" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194507 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerName="init" Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194528 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194538 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194568 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a6fa30-c5bf-4a35-981a-0681769b8da5" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194579 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a6fa30-c5bf-4a35-981a-0681769b8da5" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194597 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06c31d6-aa39-4c7a-999a-f94f49e13a43" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194606 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06c31d6-aa39-4c7a-999a-f94f49e13a43" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194619 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" containerName="keystone-db-sync" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194627 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" containerName="keystone-db-sync" Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194639 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b08526-186a-47d2-b829-e8e85677343d" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194650 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b08526-186a-47d2-b829-e8e85677343d" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194664 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19bf99b9-22eb-424f-b882-3f65e37f71fa" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194673 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="19bf99b9-22eb-424f-b882-3f65e37f71fa" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: E0123 13:50:55.194689 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0916c80f-b8f7-4594-9409-fa66861ec3be" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194698 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0916c80f-b8f7-4594-9409-fa66861ec3be" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: 
E0123 13:50:55.194716 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerName="dnsmasq-dns" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194726 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerName="dnsmasq-dns" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.194952 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a6fa30-c5bf-4a35-981a-0681769b8da5" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.195004 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eee851c-9eea-48a7-a2c1-3444eb1738de" containerName="dnsmasq-dns" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.195018 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.195039 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b08526-186a-47d2-b829-e8e85677343d" containerName="mariadb-database-create" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.195055 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06c31d6-aa39-4c7a-999a-f94f49e13a43" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.195069 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0916c80f-b8f7-4594-9409-fa66861ec3be" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.195081 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="19bf99b9-22eb-424f-b882-3f65e37f71fa" containerName="mariadb-account-create-update" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.195093 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" containerName="keystone-db-sync" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.196343 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.223801 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.223883 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-svc\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.223945 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.223995 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-swift-storage-0\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.224043 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-config\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.224079 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f77gg\" (UniqueName: \"kubernetes.io/projected/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-kube-api-access-f77gg\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.226712 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f7756d4b7-krzcw"] Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.261433 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5hgs2"] Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.262757 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.267451 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.268009 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.268754 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.269007 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d69g7"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.269226 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.307130 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5hgs2"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326456 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-swift-storage-0\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326504 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-fernet-keys\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326620 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-config\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326666 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-combined-ca-bundle\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326715 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f77gg\" (UniqueName: \"kubernetes.io/projected/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-kube-api-access-f77gg\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326746 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-config-data\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326786 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvmr2\" (UniqueName: \"kubernetes.io/projected/1e70861a-6514-486d-8a0d-c60d649a25d1-kube-api-access-dvmr2\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326906 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-credential-keys\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326945 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.326995 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-scripts\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.327050 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-svc\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.328337 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-svc\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.328483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-config\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.329202 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.329819 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-swift-storage-0\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.335640 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.351463 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f77gg\" (UniqueName: \"kubernetes.io/projected/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-kube-api-access-f77gg\") pod \"dnsmasq-dns-6f7756d4b7-krzcw\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.425468 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d7jd6"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.427509 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.429654 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-combined-ca-bundle\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.429699 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-config-data\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.429722 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvmr2\" (UniqueName: \"kubernetes.io/projected/1e70861a-6514-486d-8a0d-c60d649a25d1-kube-api-access-dvmr2\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.430329 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ns665"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.430738 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.430927 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.433748 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-credential-keys\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.433800 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-scripts\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.433871 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-fernet-keys\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.433876 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-combined-ca-bundle\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.438649 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d7jd6"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.442897 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-credential-keys\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.443883 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-config-data\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.451987 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-scripts\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.452883 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-fernet-keys\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.497012 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvmr2\" (UniqueName: \"kubernetes.io/projected/1e70861a-6514-486d-8a0d-c60d649a25d1-kube-api-access-dvmr2\") pod \"keystone-bootstrap-5hgs2\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.533987 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.535825 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-scripts\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.535923 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-config-data\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.536008 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqcx\" (UniqueName: \"kubernetes.io/projected/506b2de1-f73d-4781-a52d-3f622c78660d-kube-api-access-xbqcx\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.536041 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-combined-ca-bundle\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.536092 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-db-sync-config-data\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.536127 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/506b2de1-f73d-4781-a52d-3f622c78660d-etc-machine-id\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.571705 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8cggt"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.575145 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.589671 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.589992 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.590006 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n2nlk"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.608134 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5hgs2"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.612111 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9987b5459-gpn75"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.614000 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.618961 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.619225 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.619357 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-wvhn4"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.619539 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.637890 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvtw\" (UniqueName: \"kubernetes.io/projected/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-kube-api-access-vmvtw\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.637981 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbqcx\" (UniqueName: \"kubernetes.io/projected/506b2de1-f73d-4781-a52d-3f622c78660d-kube-api-access-xbqcx\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638025 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-combined-ca-bundle\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638090 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-db-sync-config-data\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638129 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-combined-ca-bundle\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/506b2de1-f73d-4781-a52d-3f622c78660d-etc-machine-id\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-scripts\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638216 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-config\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638259 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-config-data\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.638698 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/506b2de1-f73d-4781-a52d-3f622c78660d-etc-machine-id\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.659548 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8cggt"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.664063 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-db-sync-config-data\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.669473 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-config-data\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.680808 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.683772 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.686391 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.688877 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.692380 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-combined-ca-bundle\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.698674 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbqcx\" (UniqueName: \"kubernetes.io/projected/506b2de1-f73d-4781-a52d-3f622c78660d-kube-api-access-xbqcx\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.746010 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-scripts\") pod \"cinder-db-sync-d7jd6\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.752714 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9987b5459-gpn75"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.785292 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-config\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.785513 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-config-data\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.818464 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvtw\" (UniqueName: \"kubernetes.io/projected/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-kube-api-access-vmvtw\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.818744 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-scripts\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.818897 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65n6\" (UniqueName: \"kubernetes.io/projected/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-kube-api-access-g65n6\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.819009 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-horizon-secret-key\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.819051 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-logs\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.819084 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-combined-ca-bundle\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.824830 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-config\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.835946 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.840818 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-combined-ca-bundle\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.920752 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d7jd6"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.932478 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvtw\" (UniqueName: \"kubernetes.io/projected/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-kube-api-access-vmvtw\") pod \"neutron-db-sync-8cggt\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.944380 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.944650 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-scripts\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.944756 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g65n6\" (UniqueName: \"kubernetes.io/projected/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-kube-api-access-g65n6\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.944807 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-horizon-secret-key\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.944826 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-logs\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.944871 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-config-data\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.944897 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.945043 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-run-httpd\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.945068 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-scripts\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.945130 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-log-httpd\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.945230 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-config-data\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.945333 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc74c\" (UniqueName: \"kubernetes.io/projected/93183170-d32d-4633-a9b5-5740232e4da4-kube-api-access-bc74c\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.946678 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-config-data\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.947142 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-scripts\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.947779 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-logs\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.972162 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-42qfl"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.973013 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-horizon-secret-key\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.974172 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.974948 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8cggt"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.994246 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-42qfl"]
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.995616 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 23 13:50:55 crc kubenswrapper[4771]: I0123 13:50:55.997005 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7xxs6"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.000909 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-666d4c989c-wvzsc"]
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.009550 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.023559 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-cz4ft"]
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.025178 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.027491 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-68sv2"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.028348 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.029847 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.045877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g65n6\" (UniqueName: \"kubernetes.io/projected/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-kube-api-access-g65n6\") pod \"horizon-9987b5459-gpn75\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.053350 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-scripts\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.053532 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-run-httpd\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.053619 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fcfc471-7906-46f5-9238-4d66823ca1bf-logs\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.053688 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-log-httpd\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.053754 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61c40c2c-8ac1-4398-bef9-a89917abcc44-horizon-secret-key\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.053862 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-scripts\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.053935 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-scripts\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.054017 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-config-data\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.054132 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc74c\" (UniqueName: \"kubernetes.io/projected/93183170-d32d-4633-a9b5-5740232e4da4-kube-api-access-bc74c\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.054247 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.054391 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-combined-ca-bundle\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.057316 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9987b5459-gpn75"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.063785 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c40c2c-8ac1-4398-bef9-a89917abcc44-logs\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.063891 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdq2b\" (UniqueName: \"kubernetes.io/projected/13f63357-c0a0-49eb-9011-bd32c84f414a-kube-api-access-qdq2b\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.064045 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kblgt\" (UniqueName: \"kubernetes.io/projected/8fcfc471-7906-46f5-9238-4d66823ca1bf-kube-api-access-kblgt\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.064142 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-db-sync-config-data\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.064218 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghqx\" (UniqueName: \"kubernetes.io/projected/61c40c2c-8ac1-4398-bef9-a89917abcc44-kube-api-access-7ghqx\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.064350 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-config-data\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.064466 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.064555 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-config-data\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.064630 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-combined-ca-bundle\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.073244 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-run-httpd\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.076668 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-log-httpd\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.077955 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-scripts\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.083877 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cz4ft"]
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.100809 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-666d4c989c-wvzsc"]
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.114500 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.115466 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.116309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-config-data\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.118478 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7756d4b7-krzcw"]
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.149018 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc74c\" (UniqueName: \"kubernetes.io/projected/93183170-d32d-4633-a9b5-5740232e4da4-kube-api-access-bc74c\") pod \"ceilometer-0\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.154479 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75b9f85775-829n5"]
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.156567 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.174569 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-scripts\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.175580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-config-data\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.175734 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-combined-ca-bundle\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.175816 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c40c2c-8ac1-4398-bef9-a89917abcc44-logs\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.175889 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdq2b\" (UniqueName: \"kubernetes.io/projected/13f63357-c0a0-49eb-9011-bd32c84f414a-kube-api-access-qdq2b\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.175981 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kblgt\" (UniqueName: \"kubernetes.io/projected/8fcfc471-7906-46f5-9238-4d66823ca1bf-kube-api-access-kblgt\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.179534 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-db-sync-config-data\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.183518 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c40c2c-8ac1-4398-bef9-a89917abcc44-logs\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.185657 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b9f85775-829n5"]
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.186147 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-config-data\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.192628 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-scripts\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.201226 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghqx\" (UniqueName: \"kubernetes.io/projected/61c40c2c-8ac1-4398-bef9-a89917abcc44-kube-api-access-7ghqx\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.201870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-config-data\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.202569 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-combined-ca-bundle\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.202813 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fcfc471-7906-46f5-9238-4d66823ca1bf-logs\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.202893 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61c40c2c-8ac1-4398-bef9-a89917abcc44-horizon-secret-key\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.202999 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-scripts\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.203869 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-scripts\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.205130 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fcfc471-7906-46f5-9238-4d66823ca1bf-logs\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.224217 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-combined-ca-bundle\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.227522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-config-data\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.230161 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kblgt\" (UniqueName: \"kubernetes.io/projected/8fcfc471-7906-46f5-9238-4d66823ca1bf-kube-api-access-kblgt\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.239055 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdq2b\" (UniqueName: \"kubernetes.io/projected/13f63357-c0a0-49eb-9011-bd32c84f414a-kube-api-access-qdq2b\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.247796 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghqx\" (UniqueName: \"kubernetes.io/projected/61c40c2c-8ac1-4398-bef9-a89917abcc44-kube-api-access-7ghqx\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.250565 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-db-sync-config-data\") pod \"barbican-db-sync-42qfl\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.251268 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-combined-ca-bundle\") pod \"placement-db-sync-cz4ft\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.257274 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61c40c2c-8ac1-4398-bef9-a89917abcc44-horizon-secret-key\") pod \"horizon-666d4c989c-wvzsc\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.313463 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-config\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.313619 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-svc\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.313706 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-sb\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.313728 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-nb\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.313757 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-swift-storage-0\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.313790 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqfst\" (UniqueName: \"kubernetes.io/projected/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-kube-api-access-pqfst\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.321147 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-42qfl"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.363083 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-666d4c989c-wvzsc"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.403546 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cz4ft"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.414177 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.415998 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-config\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.416079 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-svc\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.416141 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-sb\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.416164 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-nb\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.416194 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-swift-storage-0\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.416231 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqfst\" (UniqueName: \"kubernetes.io/projected/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-kube-api-access-pqfst\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.423899 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-sb\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.424872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-nb\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.426075 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-swift-storage-0\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5"
Jan 23 13:50:56 crc
kubenswrapper[4771]: I0123 13:50:56.434872 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-svc\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.435071 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-config\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.452182 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqfst\" (UniqueName: \"kubernetes.io/projected/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-kube-api-access-pqfst\") pod \"dnsmasq-dns-75b9f85775-829n5\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.513750 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7756d4b7-krzcw"] Jan 23 13:50:56 crc kubenswrapper[4771]: W0123 13:50:56.624951 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fbbc53f_bb5b_43b0_b9df_e8cc8ad9a762.slice/crio-1b8af79fbd8fe71c516cd9f042776b4e32680c5002f609c88d222c7ed9df5c3f WatchSource:0}: Error finding container 1b8af79fbd8fe71c516cd9f042776b4e32680c5002f609c88d222c7ed9df5c3f: Status 404 returned error can't find the container with id 1b8af79fbd8fe71c516cd9f042776b4e32680c5002f609c88d222c7ed9df5c3f Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.808521 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5hgs2"] Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.888462 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:50:56 crc kubenswrapper[4771]: I0123 13:50:56.893189 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d7jd6"] Jan 23 13:50:56 crc kubenswrapper[4771]: W0123 13:50:56.893392 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e70861a_6514_486d_8a0d_c60d649a25d1.slice/crio-f42d91c2a483a0cf8152f431f5f3a20efe672a739d7c4ba0e9eb40c6b7a02e08 WatchSource:0}: Error finding container f42d91c2a483a0cf8152f431f5f3a20efe672a739d7c4ba0e9eb40c6b7a02e08: Status 404 returned error can't find the container with id f42d91c2a483a0cf8152f431f5f3a20efe672a739d7c4ba0e9eb40c6b7a02e08 Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.002992 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5hgs2" event={"ID":"1e70861a-6514-486d-8a0d-c60d649a25d1","Type":"ContainerStarted","Data":"f42d91c2a483a0cf8152f431f5f3a20efe672a739d7c4ba0e9eb40c6b7a02e08"} Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.008682 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" event={"ID":"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762","Type":"ContainerStarted","Data":"1b8af79fbd8fe71c516cd9f042776b4e32680c5002f609c88d222c7ed9df5c3f"} Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.193504 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.346768 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9987b5459-gpn75"] Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.365142 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8cggt"] Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.522656 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-666d4c989c-wvzsc"] Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.535789 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-42qfl"] Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.750045 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cz4ft"] Jan 23 13:50:57 crc kubenswrapper[4771]: I0123 13:50:57.781523 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75b9f85775-829n5"] Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.096899 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8cggt" event={"ID":"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc","Type":"ContainerStarted","Data":"97c11b5aaa731ba63fb076b57926582f57dedfc2ab9ddff5231e4899e9baa2cd"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.096955 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8cggt" event={"ID":"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc","Type":"ContainerStarted","Data":"2e8c7d1adef5607cab52f5342a3db82f9d45cd78f483f1436c963748326455e2"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.103875 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-47x2v" event={"ID":"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06","Type":"ContainerStarted","Data":"7a7a23b53d954ad91a5b4531d71241368e6fd0a4546f105836c13cfe2ff7c43d"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.116423 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-666d4c989c-wvzsc" event={"ID":"61c40c2c-8ac1-4398-bef9-a89917abcc44","Type":"ContainerStarted","Data":"ec45dd95af6485f158a099f9d3560c370bd9f83ea0ba3e2f33b228e60a7e23b5"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.128357 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-666d4c989c-wvzsc"] Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.159316 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8cggt" podStartSLOduration=3.15928488 podStartE2EDuration="3.15928488s" podCreationTimestamp="2026-01-23 13:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:50:58.120821016 +0000 UTC m=+1099.143358661" watchObservedRunningTime="2026-01-23 13:50:58.15928488 +0000 UTC m=+1099.181822515" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.166631 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b9f85775-829n5" event={"ID":"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3","Type":"ContainerStarted","Data":"b9a0e22df596ea9adc9fb907a4243a936ec5137251f316f825948d002444e17c"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.184921 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-42qfl" event={"ID":"13f63357-c0a0-49eb-9011-bd32c84f414a","Type":"ContainerStarted","Data":"c78fbb961c88ce0024894b061b2ba68781596e8404afd0107e35705ce36bcdc9"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.210544 4771 generic.go:334] "Generic (PLEG): container finished" podID="1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" containerID="42051a5eca6719fc315cf4c6efa2b8294201bd77c2997ee3bdf14fd66cd51da9" exitCode=0 Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.210688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" event={"ID":"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762","Type":"ContainerDied","Data":"42051a5eca6719fc315cf4c6efa2b8294201bd77c2997ee3bdf14fd66cd51da9"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.231881 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d7jd6" event={"ID":"506b2de1-f73d-4781-a52d-3f622c78660d","Type":"ContainerStarted","Data":"e9678ac423d72f443a637ed293b47d585b6c1fd768e5a514efda8f6f02ee499d"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.234579 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerStarted","Data":"7134d8ec5944480e9bdbe56aae45138b3bcc0bc778bf8609e2671a843061f0a9"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.255710 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f9546c5f-5w2td"] Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.257553 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.282718 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-47x2v" podStartSLOduration=4.128378979 podStartE2EDuration="39.282685103s" podCreationTimestamp="2026-01-23 13:50:19 +0000 UTC" firstStartedPulling="2026-01-23 13:50:20.188335351 +0000 UTC m=+1061.210872976" lastFinishedPulling="2026-01-23 13:50:55.342641475 +0000 UTC m=+1096.365179100" observedRunningTime="2026-01-23 13:50:58.176546504 +0000 UTC m=+1099.199084139" watchObservedRunningTime="2026-01-23 13:50:58.282685103 +0000 UTC m=+1099.305222728" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.283670 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cz4ft" event={"ID":"8fcfc471-7906-46f5-9238-4d66823ca1bf","Type":"ContainerStarted","Data":"0f0564fa4de829f2baf1f6af27c4ec153ec5b73666c7460b7d5293178ec9f992"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.342709 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9546c5f-5w2td"] Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.355447 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5hgs2" event={"ID":"1e70861a-6514-486d-8a0d-c60d649a25d1","Type":"ContainerStarted","Data":"467dd22754639004b711c0bbe3901271b9bf2feadb428d6c5284bc6dfaefc164"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.356912 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.406302 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5hgs2" podStartSLOduration=3.406278543 podStartE2EDuration="3.406278543s" podCreationTimestamp="2026-01-23 13:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:50:58.397884339 +0000 UTC m=+1099.420421964" watchObservedRunningTime="2026-01-23 13:50:58.406278543 +0000 UTC m=+1099.428816168" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.416215 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9987b5459-gpn75" event={"ID":"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4","Type":"ContainerStarted","Data":"5a2d0e86b8c0ec8cdb69cba4f4b7e5d62675e8a8c49823222e05c963e0cc3931"} Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.438378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7aa616bb-c065-49e0-8dfc-d35709088801-horizon-secret-key\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.438524 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-scripts\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.438638 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6c5\" (UniqueName: 
\"kubernetes.io/projected/7aa616bb-c065-49e0-8dfc-d35709088801-kube-api-access-wf6c5\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.438655 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aa616bb-c065-49e0-8dfc-d35709088801-logs\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.438705 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-config-data\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.541144 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7aa616bb-c065-49e0-8dfc-d35709088801-horizon-secret-key\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.541260 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-scripts\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.542876 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-scripts\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.543962 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6c5\" (UniqueName: \"kubernetes.io/projected/7aa616bb-c065-49e0-8dfc-d35709088801-kube-api-access-wf6c5\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.544054 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aa616bb-c065-49e0-8dfc-d35709088801-logs\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.544308 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-config-data\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.545201 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aa616bb-c065-49e0-8dfc-d35709088801-logs\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " 
pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.548626 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-config-data\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.548887 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7aa616bb-c065-49e0-8dfc-d35709088801-horizon-secret-key\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.565713 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6c5\" (UniqueName: \"kubernetes.io/projected/7aa616bb-c065-49e0-8dfc-d35709088801-kube-api-access-wf6c5\") pod \"horizon-6f9546c5f-5w2td\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.644154 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.840641 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.957208 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-nb\") pod \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.957287 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-sb\") pod \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.957510 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-config\") pod \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.957661 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f77gg\" (UniqueName: \"kubernetes.io/projected/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-kube-api-access-f77gg\") pod \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.957740 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-swift-storage-0\") pod \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.957853 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-svc\") pod \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\" (UID: \"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762\") " Jan 23 13:50:58 crc kubenswrapper[4771]: I0123 13:50:58.978798 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-kube-api-access-f77gg" (OuterVolumeSpecName: "kube-api-access-f77gg") pod "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" (UID: "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762"). InnerVolumeSpecName "kube-api-access-f77gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.003922 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" (UID: "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.019853 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" (UID: "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.020094 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" (UID: "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.053950 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" (UID: "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.067016 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.067337 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.067455 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.067525 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.067591 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f77gg\" (UniqueName: \"kubernetes.io/projected/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-kube-api-access-f77gg\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.095944 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-config" (OuterVolumeSpecName: "config") pod "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" (UID: "1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.184362 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.385257 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f9546c5f-5w2td"] Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.455084 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" event={"ID":"1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762","Type":"ContainerDied","Data":"1b8af79fbd8fe71c516cd9f042776b4e32680c5002f609c88d222c7ed9df5c3f"} Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.455169 4771 scope.go:117] "RemoveContainer" containerID="42051a5eca6719fc315cf4c6efa2b8294201bd77c2997ee3bdf14fd66cd51da9" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.455396 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7756d4b7-krzcw" Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.477273 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9546c5f-5w2td" event={"ID":"7aa616bb-c065-49e0-8dfc-d35709088801","Type":"ContainerStarted","Data":"882f9f8cb6b7dba947a68d4de646702e9e19a5a043dac2eafcdea8baa16c1683"} Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.492224 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-mzql6" event={"ID":"6a326387-5c33-41e0-b73a-8670ae5b0c48","Type":"ContainerStarted","Data":"ff5c952fbb74d6adf2903aa176ab04d30d75225d33f3e73b0e3647bb6ef2aa04"} Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.572733 4771 generic.go:334] "Generic (PLEG): container finished" podID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerID="a56c58735bf9bf0b1f702868165f319b0ce6dbebcbd81d7280c0d966a22224af" exitCode=0 Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.573188 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b9f85775-829n5" event={"ID":"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3","Type":"ContainerDied","Data":"a56c58735bf9bf0b1f702868165f319b0ce6dbebcbd81d7280c0d966a22224af"} Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.598545 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7756d4b7-krzcw"] Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.612398 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f7756d4b7-krzcw"] Jan 23 13:50:59 crc kubenswrapper[4771]: I0123 13:50:59.614427 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-mzql6" podStartSLOduration=3.144733256 podStartE2EDuration="36.614385443s" podCreationTimestamp="2026-01-23 13:50:23 +0000 UTC" firstStartedPulling="2026-01-23 13:50:24.992154698 +0000 UTC m=+1066.014692323" lastFinishedPulling="2026-01-23 13:50:58.461806885 +0000 UTC m=+1099.484344510" observedRunningTime="2026-01-23 13:50:59.572237643 +0000 UTC m=+1100.594775268" watchObservedRunningTime="2026-01-23 13:50:59.614385443 +0000 UTC m=+1100.636923068" Jan 23 13:51:00 crc kubenswrapper[4771]: I0123 13:51:00.311880 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:51:00 crc kubenswrapper[4771]: I0123 13:51:00.311947 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:51:00 crc kubenswrapper[4771]: I0123 13:51:00.599651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b9f85775-829n5" event={"ID":"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3","Type":"ContainerStarted","Data":"2896a6cc4a2017672bbacac5fd6177688d60deaa40d0fcb76782c74ea1654e99"} Jan 23 13:51:00 crc kubenswrapper[4771]: I0123 13:51:00.601170 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:51:00 crc kubenswrapper[4771]: I0123 13:51:00.626317 4771 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-75b9f85775-829n5" podStartSLOduration=5.626293592 podStartE2EDuration="5.626293592s" podCreationTimestamp="2026-01-23 13:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:00.625641992 +0000 UTC m=+1101.648179627" watchObservedRunningTime="2026-01-23 13:51:00.626293592 +0000 UTC m=+1101.648831217" Jan 23 13:51:01 crc kubenswrapper[4771]: I0123 13:51:01.250742 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" path="/var/lib/kubelet/pods/1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762/volumes" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.112033 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9987b5459-gpn75"] Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.145984 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-99f77f8d8-2j9s2"] Jan 23 13:51:04 crc kubenswrapper[4771]: E0123 13:51:04.148636 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" containerName="init" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.148664 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" containerName="init" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.148875 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fbbc53f-bb5b-43b0-b9df-e8cc8ad9a762" containerName="init" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.150020 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.168745 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.194902 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-99f77f8d8-2j9s2"] Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.227493 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-config-data\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.227564 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbnm\" (UniqueName: \"kubernetes.io/projected/10c5f724-de62-4d78-be40-47f2a2e11eb6-kube-api-access-lsbnm\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.227672 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-secret-key\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.227751 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/10c5f724-de62-4d78-be40-47f2a2e11eb6-logs\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.227823 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-scripts\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.227863 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-tls-certs\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.227988 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-combined-ca-bundle\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.243903 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9546c5f-5w2td"] Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.282837 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57cbdcc8d-5lcfn"] Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.287397 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.297212 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57cbdcc8d-5lcfn"] Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330131 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c5f724-de62-4d78-be40-47f2a2e11eb6-logs\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330227 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-scripts\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330259 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-tls-certs\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330327 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-combined-ca-bundle\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330384 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-config-data\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330422 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbnm\" (UniqueName: \"kubernetes.io/projected/10c5f724-de62-4d78-be40-47f2a2e11eb6-kube-api-access-lsbnm\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330458 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-secret-key\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.330958 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c5f724-de62-4d78-be40-47f2a2e11eb6-logs\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.332108 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-scripts\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " 
pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.333091 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-config-data\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.338060 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-secret-key\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.338109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-tls-certs\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.352308 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbnm\" (UniqueName: \"kubernetes.io/projected/10c5f724-de62-4d78-be40-47f2a2e11eb6-kube-api-access-lsbnm\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.353189 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-combined-ca-bundle\") pod \"horizon-99f77f8d8-2j9s2\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") " pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.433489 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-horizon-secret-key\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.433858 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vx8r\" (UniqueName: \"kubernetes.io/projected/dd12560a-7353-492b-8037-822d7aceb4e0-kube-api-access-9vx8r\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.433925 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd12560a-7353-492b-8037-822d7aceb4e0-config-data\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.434018 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd12560a-7353-492b-8037-822d7aceb4e0-scripts\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 
13:51:04.434175 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-combined-ca-bundle\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.434271 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-horizon-tls-certs\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.434467 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd12560a-7353-492b-8037-822d7aceb4e0-logs\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.507385 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.537227 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-horizon-secret-key\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.537294 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vx8r\" (UniqueName: \"kubernetes.io/projected/dd12560a-7353-492b-8037-822d7aceb4e0-kube-api-access-9vx8r\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.537346 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd12560a-7353-492b-8037-822d7aceb4e0-config-data\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.537450 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd12560a-7353-492b-8037-822d7aceb4e0-scripts\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.537490 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-combined-ca-bundle\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.537519 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-horizon-tls-certs\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " 
pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.537582 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd12560a-7353-492b-8037-822d7aceb4e0-logs\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.538058 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd12560a-7353-492b-8037-822d7aceb4e0-logs\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.540671 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd12560a-7353-492b-8037-822d7aceb4e0-scripts\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.541227 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dd12560a-7353-492b-8037-822d7aceb4e0-config-data\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.555313 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-horizon-tls-certs\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.556095 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-combined-ca-bundle\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.556436 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/dd12560a-7353-492b-8037-822d7aceb4e0-horizon-secret-key\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.559314 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vx8r\" (UniqueName: \"kubernetes.io/projected/dd12560a-7353-492b-8037-822d7aceb4e0-kube-api-access-9vx8r\") pod \"horizon-57cbdcc8d-5lcfn\" (UID: \"dd12560a-7353-492b-8037-822d7aceb4e0\") " pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.606701 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.674077 4771 generic.go:334] "Generic (PLEG): container finished" podID="1e70861a-6514-486d-8a0d-c60d649a25d1" containerID="467dd22754639004b711c0bbe3901271b9bf2feadb428d6c5284bc6dfaefc164" exitCode=0 Jan 23 13:51:04 crc kubenswrapper[4771]: I0123 13:51:04.674136 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5hgs2" event={"ID":"1e70861a-6514-486d-8a0d-c60d649a25d1","Type":"ContainerDied","Data":"467dd22754639004b711c0bbe3901271b9bf2feadb428d6c5284bc6dfaefc164"} Jan 23 13:51:05 crc kubenswrapper[4771]: I0123 13:51:05.689157 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a326387-5c33-41e0-b73a-8670ae5b0c48" containerID="ff5c952fbb74d6adf2903aa176ab04d30d75225d33f3e73b0e3647bb6ef2aa04" exitCode=0 Jan 23 13:51:05 crc kubenswrapper[4771]: I0123 13:51:05.689373 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-mzql6" event={"ID":"6a326387-5c33-41e0-b73a-8670ae5b0c48","Type":"ContainerDied","Data":"ff5c952fbb74d6adf2903aa176ab04d30d75225d33f3e73b0e3647bb6ef2aa04"} Jan 23 13:51:06 crc kubenswrapper[4771]: I0123 13:51:06.893875 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:51:06 crc kubenswrapper[4771]: I0123 13:51:06.983316 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57589d46b9-454st"] Jan 23 13:51:06 crc kubenswrapper[4771]: I0123 13:51:06.983620 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57589d46b9-454st" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" containerID="cri-o://46cd01ddc1a2d8b0a702f6385d53048e7ba1fd7a2a4e12a507fd00576dd5c2e6" gracePeriod=10 Jan 23 13:51:07 crc kubenswrapper[4771]: I0123 13:51:07.716514 4771 generic.go:334] "Generic (PLEG): container finished" podID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerID="46cd01ddc1a2d8b0a702f6385d53048e7ba1fd7a2a4e12a507fd00576dd5c2e6" exitCode=0 Jan 23 13:51:07 crc kubenswrapper[4771]: I0123 13:51:07.716557 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57589d46b9-454st" event={"ID":"486a5657-e74a-4037-9ec1-52b56b74bb1e","Type":"ContainerDied","Data":"46cd01ddc1a2d8b0a702f6385d53048e7ba1fd7a2a4e12a507fd00576dd5c2e6"} Jan 23 13:51:07 crc kubenswrapper[4771]: I0123 13:51:07.867760 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57589d46b9-454st" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Jan 23 13:51:12 crc kubenswrapper[4771]: I0123 13:51:12.866823 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57589d46b9-454st" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.722795 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.722886 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.723044 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dfh5c5hbhb7h58bh5b5h654h565h5c7h558h6dh5bch578h54ch58fhfhc4h656h67h56fh99h57chfdh54hchd5h684hc8h5c8h86h548h57q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ghqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-666d4c989c-wvzsc_openstack(61c40c2c-8ac1-4398-bef9-a89917abcc44): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.751727 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-666d4c989c-wvzsc" podUID="61c40c2c-8ac1-4398-bef9-a89917abcc44" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.779640 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.779707 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.779850 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65fh587h64fh6dh5cbh675h75h55h589h55fh5d7hbfh5f6h5bdh57bh5fch696h68fhfbh557hfbh59h79h648h5d4h8bh668h665h66dh5dbh665h89q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g65n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-9987b5459-gpn75_openstack(8b6f8f8d-bcb6-481f-94d6-c82918ed42f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:51:14 crc kubenswrapper[4771]: E0123 13:51:14.783076 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-9987b5459-gpn75" podUID="8b6f8f8d-bcb6-481f-94d6-c82918ed42f4" Jan 23 13:51:15 crc kubenswrapper[4771]: I0123 13:51:15.823467 4771 generic.go:334] "Generic (PLEG): container finished" podID="5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" containerID="7a7a23b53d954ad91a5b4531d71241368e6fd0a4546f105836c13cfe2ff7c43d" exitCode=0 Jan 23 13:51:15 crc kubenswrapper[4771]: I0123 13:51:15.823514 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-47x2v" event={"ID":"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06","Type":"ContainerDied","Data":"7a7a23b53d954ad91a5b4531d71241368e6fd0a4546f105836c13cfe2ff7c43d"} Jan 23 13:51:16 crc kubenswrapper[4771]: E0123 13:51:16.552886 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 13:51:16 crc kubenswrapper[4771]: E0123 13:51:16.552959 4771 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 23 13:51:16 crc kubenswrapper[4771]: E0123 13:51:16.553171 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58fhf9hd8h5cbhbh568h78h5f9h664hc9h6bhb7h7h9dh587h548h5b8h565h5d7h57h6bh698h5ch5c9h65h5d8h64bh584h5cbh666hb8h64dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wf6c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f9546c5f-5w2td_openstack(7aa616bb-c065-49e0-8dfc-d35709088801): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:51:16 crc kubenswrapper[4771]: E0123 13:51:16.555284 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-6f9546c5f-5w2td" podUID="7aa616bb-c065-49e0-8dfc-d35709088801" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.662233 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-666d4c989c-wvzsc" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.663580 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5hgs2" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.671392 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-mzql6" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743015 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ghqx\" (UniqueName: \"kubernetes.io/projected/61c40c2c-8ac1-4398-bef9-a89917abcc44-kube-api-access-7ghqx\") pod \"61c40c2c-8ac1-4398-bef9-a89917abcc44\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743487 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjlbz\" (UniqueName: \"kubernetes.io/projected/6a326387-5c33-41e0-b73a-8670ae5b0c48-kube-api-access-mjlbz\") pod \"6a326387-5c33-41e0-b73a-8670ae5b0c48\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743575 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61c40c2c-8ac1-4398-bef9-a89917abcc44-horizon-secret-key\") pod \"61c40c2c-8ac1-4398-bef9-a89917abcc44\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743620 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-config-data\") pod \"61c40c2c-8ac1-4398-bef9-a89917abcc44\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743709 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-db-sync-config-data\") pod \"6a326387-5c33-41e0-b73a-8670ae5b0c48\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743767 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-fernet-keys\") pod \"1e70861a-6514-486d-8a0d-c60d649a25d1\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743795 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-config-data\") pod \"6a326387-5c33-41e0-b73a-8670ae5b0c48\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743831 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-scripts\") pod \"1e70861a-6514-486d-8a0d-c60d649a25d1\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743863 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-combined-ca-bundle\") pod \"6a326387-5c33-41e0-b73a-8670ae5b0c48\" (UID: \"6a326387-5c33-41e0-b73a-8670ae5b0c48\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743891 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvmr2\" (UniqueName: \"kubernetes.io/projected/1e70861a-6514-486d-8a0d-c60d649a25d1-kube-api-access-dvmr2\") pod 
\"1e70861a-6514-486d-8a0d-c60d649a25d1\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743933 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-credential-keys\") pod \"1e70861a-6514-486d-8a0d-c60d649a25d1\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.743962 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-config-data\") pod \"1e70861a-6514-486d-8a0d-c60d649a25d1\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.744008 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-scripts\") pod \"61c40c2c-8ac1-4398-bef9-a89917abcc44\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.744115 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-combined-ca-bundle\") pod \"1e70861a-6514-486d-8a0d-c60d649a25d1\" (UID: \"1e70861a-6514-486d-8a0d-c60d649a25d1\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.744143 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c40c2c-8ac1-4398-bef9-a89917abcc44-logs\") pod \"61c40c2c-8ac1-4398-bef9-a89917abcc44\" (UID: \"61c40c2c-8ac1-4398-bef9-a89917abcc44\") " Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.745339 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61c40c2c-8ac1-4398-bef9-a89917abcc44-logs" (OuterVolumeSpecName: "logs") pod "61c40c2c-8ac1-4398-bef9-a89917abcc44" (UID: "61c40c2c-8ac1-4398-bef9-a89917abcc44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.749431 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-scripts" (OuterVolumeSpecName: "scripts") pod "61c40c2c-8ac1-4398-bef9-a89917abcc44" (UID: "61c40c2c-8ac1-4398-bef9-a89917abcc44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.753141 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-config-data" (OuterVolumeSpecName: "config-data") pod "61c40c2c-8ac1-4398-bef9-a89917abcc44" (UID: "61c40c2c-8ac1-4398-bef9-a89917abcc44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.756583 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6a326387-5c33-41e0-b73a-8670ae5b0c48" (UID: "6a326387-5c33-41e0-b73a-8670ae5b0c48"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.757388 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a326387-5c33-41e0-b73a-8670ae5b0c48-kube-api-access-mjlbz" (OuterVolumeSpecName: "kube-api-access-mjlbz") pod "6a326387-5c33-41e0-b73a-8670ae5b0c48" (UID: "6a326387-5c33-41e0-b73a-8670ae5b0c48"). InnerVolumeSpecName "kube-api-access-mjlbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.757455 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1e70861a-6514-486d-8a0d-c60d649a25d1" (UID: "1e70861a-6514-486d-8a0d-c60d649a25d1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.757579 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-scripts" (OuterVolumeSpecName: "scripts") pod "1e70861a-6514-486d-8a0d-c60d649a25d1" (UID: "1e70861a-6514-486d-8a0d-c60d649a25d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.758303 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1e70861a-6514-486d-8a0d-c60d649a25d1" (UID: "1e70861a-6514-486d-8a0d-c60d649a25d1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.761342 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c40c2c-8ac1-4398-bef9-a89917abcc44-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "61c40c2c-8ac1-4398-bef9-a89917abcc44" (UID: "61c40c2c-8ac1-4398-bef9-a89917abcc44"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.763026 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c40c2c-8ac1-4398-bef9-a89917abcc44-kube-api-access-7ghqx" (OuterVolumeSpecName: "kube-api-access-7ghqx") pod "61c40c2c-8ac1-4398-bef9-a89917abcc44" (UID: "61c40c2c-8ac1-4398-bef9-a89917abcc44"). InnerVolumeSpecName "kube-api-access-7ghqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.772081 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e70861a-6514-486d-8a0d-c60d649a25d1-kube-api-access-dvmr2" (OuterVolumeSpecName: "kube-api-access-dvmr2") pod "1e70861a-6514-486d-8a0d-c60d649a25d1" (UID: "1e70861a-6514-486d-8a0d-c60d649a25d1"). InnerVolumeSpecName "kube-api-access-dvmr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.791868 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-config-data" (OuterVolumeSpecName: "config-data") pod "1e70861a-6514-486d-8a0d-c60d649a25d1" (UID: "1e70861a-6514-486d-8a0d-c60d649a25d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.807182 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a326387-5c33-41e0-b73a-8670ae5b0c48" (UID: "6a326387-5c33-41e0-b73a-8670ae5b0c48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.815943 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e70861a-6514-486d-8a0d-c60d649a25d1" (UID: "1e70861a-6514-486d-8a0d-c60d649a25d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850433 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ghqx\" (UniqueName: \"kubernetes.io/projected/61c40c2c-8ac1-4398-bef9-a89917abcc44-kube-api-access-7ghqx\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850484 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjlbz\" (UniqueName: \"kubernetes.io/projected/6a326387-5c33-41e0-b73a-8670ae5b0c48-kube-api-access-mjlbz\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850503 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61c40c2c-8ac1-4398-bef9-a89917abcc44-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850517 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850530 4771 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850548 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850560 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850572 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850584 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvmr2\" (UniqueName: \"kubernetes.io/projected/1e70861a-6514-486d-8a0d-c60d649a25d1-kube-api-access-dvmr2\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850596 4771 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850609 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850623 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61c40c2c-8ac1-4398-bef9-a89917abcc44-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850637 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e70861a-6514-486d-8a0d-c60d649a25d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.850648 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c40c2c-8ac1-4398-bef9-a89917abcc44-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.856130 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-config-data" (OuterVolumeSpecName: "config-data") pod "6a326387-5c33-41e0-b73a-8670ae5b0c48" (UID: "6a326387-5c33-41e0-b73a-8670ae5b0c48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.860789 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5hgs2" event={"ID":"1e70861a-6514-486d-8a0d-c60d649a25d1","Type":"ContainerDied","Data":"f42d91c2a483a0cf8152f431f5f3a20efe672a739d7c4ba0e9eb40c6b7a02e08"} Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.860880 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42d91c2a483a0cf8152f431f5f3a20efe672a739d7c4ba0e9eb40c6b7a02e08" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.860818 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5hgs2" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.867069 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-mzql6" event={"ID":"6a326387-5c33-41e0-b73a-8670ae5b0c48","Type":"ContainerDied","Data":"76b5426cd44c24f5521a7ce86eed98a2ed379137203e98a8849db41760a50c68"} Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.867151 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-mzql6" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.867532 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b5426cd44c24f5521a7ce86eed98a2ed379137203e98a8849db41760a50c68" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.870385 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-666d4c989c-wvzsc" Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.871702 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-666d4c989c-wvzsc" event={"ID":"61c40c2c-8ac1-4398-bef9-a89917abcc44","Type":"ContainerDied","Data":"ec45dd95af6485f158a099f9d3560c370bd9f83ea0ba3e2f33b228e60a7e23b5"} Jan 23 13:51:16 crc kubenswrapper[4771]: I0123 13:51:16.953946 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a326387-5c33-41e0-b73a-8670ae5b0c48-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.010719 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-666d4c989c-wvzsc"] Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.026989 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-666d4c989c-wvzsc"] Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.243987 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c40c2c-8ac1-4398-bef9-a89917abcc44" path="/var/lib/kubelet/pods/61c40c2c-8ac1-4398-bef9-a89917abcc44/volumes" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.800367 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5hgs2"] Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.809492 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5hgs2"] Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.901388 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hbkvm"] Jan 23 13:51:17 crc kubenswrapper[4771]: E0123 13:51:17.902051 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e70861a-6514-486d-8a0d-c60d649a25d1" containerName="keystone-bootstrap" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.902073 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e70861a-6514-486d-8a0d-c60d649a25d1" containerName="keystone-bootstrap" Jan 23 13:51:17 crc kubenswrapper[4771]: E0123 13:51:17.902103 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a326387-5c33-41e0-b73a-8670ae5b0c48" containerName="watcher-db-sync" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.902111 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a326387-5c33-41e0-b73a-8670ae5b0c48" containerName="watcher-db-sync" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.902335 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a326387-5c33-41e0-b73a-8670ae5b0c48" containerName="watcher-db-sync" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.902370 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e70861a-6514-486d-8a0d-c60d649a25d1" containerName="keystone-bootstrap" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.903224 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.911314 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.911495 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.911588 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-d69g7" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.911588 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.911592 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.923021 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hbkvm"] Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.982592 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv46l\" (UniqueName: \"kubernetes.io/projected/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-kube-api-access-dv46l\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.982706 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-config-data\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.982752 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-combined-ca-bundle\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.982817 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-fernet-keys\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.982901 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-scripts\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:17 crc kubenswrapper[4771]: I0123 13:51:17.982929 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-credential-keys\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.086218 4771 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-config-data\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.086299 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-combined-ca-bundle\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.086370 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-fernet-keys\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.086787 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-scripts\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.086811 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-credential-keys\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.086947 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv46l\" (UniqueName: \"kubernetes.io/projected/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-kube-api-access-dv46l\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.101929 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-combined-ca-bundle\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.103823 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-config-data\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.105763 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-scripts\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.106264 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-credential-keys\") pod \"keystone-bootstrap-hbkvm\" (UID: 
\"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.123154 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv46l\" (UniqueName: \"kubernetes.io/projected/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-kube-api-access-dv46l\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.135208 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-fernet-keys\") pod \"keystone-bootstrap-hbkvm\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.178831 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.180367 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.184906 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-78rbh" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.185485 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.223454 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.258336 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.260471 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.261117 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.263941 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.273830 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.275684 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.284914 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.285740 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.293655 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.293718 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.293847 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sszvq\" (UniqueName: \"kubernetes.io/projected/ebafbd30-6f52-4209-b962-c97da4d4f9da-kube-api-access-sszvq\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.293876 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.293898 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebafbd30-6f52-4209-b962-c97da4d4f9da-logs\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.326789 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.396606 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sszvq\" (UniqueName: \"kubernetes.io/projected/ebafbd30-6f52-4209-b962-c97da4d4f9da-kube-api-access-sszvq\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.396715 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-logs\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.396750 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.396783 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-config-data\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.396808 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebafbd30-6f52-4209-b962-c97da4d4f9da-logs\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397033 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45g2w\" (UniqueName: \"kubernetes.io/projected/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-kube-api-access-45g2w\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397165 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397266 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397341 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397450 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-logs\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397478 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-config-data\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397648 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: 
\"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397830 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebafbd30-6f52-4209-b962-c97da4d4f9da-logs\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.397912 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkkc\" (UniqueName: \"kubernetes.io/projected/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-kube-api-access-stkkc\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.398028 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.402586 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.403517 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.415157 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.417878 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sszvq\" (UniqueName: \"kubernetes.io/projected/ebafbd30-6f52-4209-b962-c97da4d4f9da-kube-api-access-sszvq\") pod \"watcher-decision-engine-0\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.500975 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45g2w\" (UniqueName: \"kubernetes.io/projected/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-kube-api-access-45g2w\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501094 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501130 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-logs\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501152 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-config-data\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501179 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501217 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkkc\" (UniqueName: \"kubernetes.io/projected/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-kube-api-access-stkkc\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501249 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501291 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-logs\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.501317 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-config-data\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.502921 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-logs\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.504240 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-logs\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.505897 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.506300 4771 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.506580 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.506825 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.507318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-config-data\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.508480 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-config-data\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.523234 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkkc\" (UniqueName: \"kubernetes.io/projected/a9b26a6d-89f0-40b9-9887-5d8aedf33ad5-kube-api-access-stkkc\") pod \"watcher-applier-0\" (UID: \"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5\") " pod="openstack/watcher-applier-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.524493 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45g2w\" (UniqueName: \"kubernetes.io/projected/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-kube-api-access-45g2w\") pod \"watcher-api-0\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.596187 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 13:51:18 crc kubenswrapper[4771]: I0123 13:51:18.611173 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Jan 23 13:51:19 crc kubenswrapper[4771]: I0123 13:51:19.242319 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e70861a-6514-486d-8a0d-c60d649a25d1" path="/var/lib/kubelet/pods/1e70861a-6514-486d-8a0d-c60d649a25d1/volumes" Jan 23 13:51:22 crc kubenswrapper[4771]: I0123 13:51:22.868311 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57589d46b9-454st" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Jan 23 13:51:22 crc kubenswrapper[4771]: I0123 13:51:22.869868 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:51:27 crc kubenswrapper[4771]: I0123 13:51:27.870563 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57589d46b9-454st" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Jan 23 13:51:30 crc kubenswrapper[4771]: I0123 13:51:30.312789 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:51:30 crc kubenswrapper[4771]: I0123 13:51:30.313569 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:51:30 crc kubenswrapper[4771]: I0123 13:51:30.313639 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:51:30 crc kubenswrapper[4771]: I0123 13:51:30.314934 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfc914e995173c379318536f5b71f7a2d9eafa2db96a43d222f1b68a93208d43"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:51:30 crc kubenswrapper[4771]: I0123 13:51:30.315052 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://dfc914e995173c379318536f5b71f7a2d9eafa2db96a43d222f1b68a93208d43" gracePeriod=600 Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.034021 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="dfc914e995173c379318536f5b71f7a2d9eafa2db96a43d222f1b68a93208d43" exitCode=0 Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.034070 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"dfc914e995173c379318536f5b71f7a2d9eafa2db96a43d222f1b68a93208d43"} Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.034132 4771 
scope.go:117] "RemoveContainer" containerID="dee83f309be07e5f0f1af35989d1377c17f49b4ede91bda4763351e5bf93274d" Jan 23 13:51:31 crc kubenswrapper[4771]: E0123 13:51:31.201051 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 23 13:51:31 crc kubenswrapper[4771]: E0123 13:51:31.201130 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 23 13:51:31 crc kubenswrapper[4771]: E0123 13:51:31.201329 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.129.56.240:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdq2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-42qfl_openstack(13f63357-c0a0-49eb-9011-bd32c84f414a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:51:31 crc kubenswrapper[4771]: E0123 13:51:31.202579 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-42qfl" podUID="13f63357-c0a0-49eb-9011-bd32c84f414a" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.375220 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9987b5459-gpn75" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.383689 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.392273 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-47x2v" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.405772 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.422314 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-config-data\") pod \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.422792 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aa616bb-c065-49e0-8dfc-d35709088801-logs\") pod \"7aa616bb-c065-49e0-8dfc-d35709088801\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.422915 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-db-sync-config-data\") pod \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.423017 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-logs\") pod \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.423105 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aa616bb-c065-49e0-8dfc-d35709088801-logs" (OuterVolumeSpecName: "logs") pod "7aa616bb-c065-49e0-8dfc-d35709088801" (UID: "7aa616bb-c065-49e0-8dfc-d35709088801"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.423378 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-logs" (OuterVolumeSpecName: "logs") pod "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4" (UID: "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.424455 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlxk6\" (UniqueName: \"kubernetes.io/projected/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-kube-api-access-dlxk6\") pod \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.424563 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7aa616bb-c065-49e0-8dfc-d35709088801-horizon-secret-key\") pod \"7aa616bb-c065-49e0-8dfc-d35709088801\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.424660 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-combined-ca-bundle\") pod \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\" (UID: \"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.424737 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g65n6\" (UniqueName: \"kubernetes.io/projected/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-kube-api-access-g65n6\") pod \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.424819 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6c5\" (UniqueName: \"kubernetes.io/projected/7aa616bb-c065-49e0-8dfc-d35709088801-kube-api-access-wf6c5\") pod \"7aa616bb-c065-49e0-8dfc-d35709088801\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.424942 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-scripts\") pod \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.425030 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-horizon-secret-key\") pod \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.425110 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-config-data\") pod \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\" (UID: \"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.425196 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-config-data\") pod \"7aa616bb-c065-49e0-8dfc-d35709088801\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.425270 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-scripts\") pod 
\"7aa616bb-c065-49e0-8dfc-d35709088801\" (UID: \"7aa616bb-c065-49e0-8dfc-d35709088801\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.426031 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7aa616bb-c065-49e0-8dfc-d35709088801-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.426166 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.426644 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-scripts" (OuterVolumeSpecName: "scripts") pod "7aa616bb-c065-49e0-8dfc-d35709088801" (UID: "7aa616bb-c065-49e0-8dfc-d35709088801"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.427462 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-config-data" (OuterVolumeSpecName: "config-data") pod "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4" (UID: "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.428183 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-config-data" (OuterVolumeSpecName: "config-data") pod "7aa616bb-c065-49e0-8dfc-d35709088801" (UID: "7aa616bb-c065-49e0-8dfc-d35709088801"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.429467 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-scripts" (OuterVolumeSpecName: "scripts") pod "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4" (UID: "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.441734 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa616bb-c065-49e0-8dfc-d35709088801-kube-api-access-wf6c5" (OuterVolumeSpecName: "kube-api-access-wf6c5") pod "7aa616bb-c065-49e0-8dfc-d35709088801" (UID: "7aa616bb-c065-49e0-8dfc-d35709088801"). InnerVolumeSpecName "kube-api-access-wf6c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.454067 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-kube-api-access-g65n6" (OuterVolumeSpecName: "kube-api-access-g65n6") pod "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4" (UID: "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4"). InnerVolumeSpecName "kube-api-access-g65n6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.454210 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4" (UID: "8b6f8f8d-bcb6-481f-94d6-c82918ed42f4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.454287 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa616bb-c065-49e0-8dfc-d35709088801-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7aa616bb-c065-49e0-8dfc-d35709088801" (UID: "7aa616bb-c065-49e0-8dfc-d35709088801"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.455167 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" (UID: "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.456534 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-kube-api-access-dlxk6" (OuterVolumeSpecName: "kube-api-access-dlxk6") pod "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" (UID: "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06"). InnerVolumeSpecName "kube-api-access-dlxk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.519320 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" (UID: "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.523662 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-config-data" (OuterVolumeSpecName: "config-data") pod "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" (UID: "5aa3ff81-43f1-4fcb-8c40-95d7aa786a06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.527555 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-swift-storage-0\") pod \"486a5657-e74a-4037-9ec1-52b56b74bb1e\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.527711 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq2bv\" (UniqueName: \"kubernetes.io/projected/486a5657-e74a-4037-9ec1-52b56b74bb1e-kube-api-access-dq2bv\") pod \"486a5657-e74a-4037-9ec1-52b56b74bb1e\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.527786 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-svc\") pod \"486a5657-e74a-4037-9ec1-52b56b74bb1e\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.527916 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-sb\") pod \"486a5657-e74a-4037-9ec1-52b56b74bb1e\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.527941 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-nb\") pod \"486a5657-e74a-4037-9ec1-52b56b74bb1e\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.528149 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-config\") pod \"486a5657-e74a-4037-9ec1-52b56b74bb1e\" (UID: \"486a5657-e74a-4037-9ec1-52b56b74bb1e\") " Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529039 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlxk6\" (UniqueName: \"kubernetes.io/projected/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-kube-api-access-dlxk6\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529060 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7aa616bb-c065-49e0-8dfc-d35709088801-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529071 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529080 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g65n6\" (UniqueName: \"kubernetes.io/projected/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-kube-api-access-g65n6\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529107 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6c5\" (UniqueName: \"kubernetes.io/projected/7aa616bb-c065-49e0-8dfc-d35709088801-kube-api-access-wf6c5\") on node \"crc\" 
DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529120 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529130 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529139 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529149 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529158 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7aa616bb-c065-49e0-8dfc-d35709088801-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529186 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.529197 4771 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.533091 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486a5657-e74a-4037-9ec1-52b56b74bb1e-kube-api-access-dq2bv" (OuterVolumeSpecName: "kube-api-access-dq2bv") pod "486a5657-e74a-4037-9ec1-52b56b74bb1e" (UID: "486a5657-e74a-4037-9ec1-52b56b74bb1e"). InnerVolumeSpecName "kube-api-access-dq2bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.595265 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "486a5657-e74a-4037-9ec1-52b56b74bb1e" (UID: "486a5657-e74a-4037-9ec1-52b56b74bb1e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.605342 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-config" (OuterVolumeSpecName: "config") pod "486a5657-e74a-4037-9ec1-52b56b74bb1e" (UID: "486a5657-e74a-4037-9ec1-52b56b74bb1e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.605341 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "486a5657-e74a-4037-9ec1-52b56b74bb1e" (UID: "486a5657-e74a-4037-9ec1-52b56b74bb1e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.615272 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "486a5657-e74a-4037-9ec1-52b56b74bb1e" (UID: "486a5657-e74a-4037-9ec1-52b56b74bb1e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.620813 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "486a5657-e74a-4037-9ec1-52b56b74bb1e" (UID: "486a5657-e74a-4037-9ec1-52b56b74bb1e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.635309 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.635350 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.635359 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.635371 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.635380 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq2bv\" (UniqueName: \"kubernetes.io/projected/486a5657-e74a-4037-9ec1-52b56b74bb1e-kube-api-access-dq2bv\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:31 crc kubenswrapper[4771]: I0123 13:51:31.635391 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/486a5657-e74a-4037-9ec1-52b56b74bb1e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.046989 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-47x2v" event={"ID":"5aa3ff81-43f1-4fcb-8c40-95d7aa786a06","Type":"ContainerDied","Data":"02d7202890419f265f813479ccae571b5c66870875f13513de6d675a3a09cae6"} Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.047392 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d7202890419f265f813479ccae571b5c66870875f13513de6d675a3a09cae6" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.047496 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-47x2v" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.053645 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9987b5459-gpn75" event={"ID":"8b6f8f8d-bcb6-481f-94d6-c82918ed42f4","Type":"ContainerDied","Data":"5a2d0e86b8c0ec8cdb69cba4f4b7e5d62675e8a8c49823222e05c963e0cc3931"} Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.053672 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9987b5459-gpn75" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.055557 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f9546c5f-5w2td" event={"ID":"7aa616bb-c065-49e0-8dfc-d35709088801","Type":"ContainerDied","Data":"882f9f8cb6b7dba947a68d4de646702e9e19a5a043dac2eafcdea8baa16c1683"} Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.055692 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f9546c5f-5w2td" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.059246 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57589d46b9-454st" event={"ID":"486a5657-e74a-4037-9ec1-52b56b74bb1e","Type":"ContainerDied","Data":"fbefe440071583a54a0f6b7eda00963084fb853434205bad7204f06af14b030f"} Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.059381 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57589d46b9-454st" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.060695 4771 generic.go:334] "Generic (PLEG): container finished" podID="e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" containerID="97c11b5aaa731ba63fb076b57926582f57dedfc2ab9ddff5231e4899e9baa2cd" exitCode=0 Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.061718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8cggt" event={"ID":"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc","Type":"ContainerDied","Data":"97c11b5aaa731ba63fb076b57926582f57dedfc2ab9ddff5231e4899e9baa2cd"} Jan 23 13:51:32 crc kubenswrapper[4771]: E0123 13:51:32.064355 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-42qfl" podUID="13f63357-c0a0-49eb-9011-bd32c84f414a" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.162790 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57589d46b9-454st"] Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.195599 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57589d46b9-454st"] Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.223682 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f9546c5f-5w2td"] Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.234216 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f9546c5f-5w2td"] Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.252797 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9987b5459-gpn75"] Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.261607 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9987b5459-gpn75"] Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.855973 4771 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7767d4d74c-nw584"] Jan 23 13:51:32 crc kubenswrapper[4771]: E0123 13:51:32.857298 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" containerName="glance-db-sync" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.857465 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" containerName="glance-db-sync" Jan 23 13:51:32 crc kubenswrapper[4771]: E0123 13:51:32.857563 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.857652 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" Jan 23 13:51:32 crc kubenswrapper[4771]: E0123 13:51:32.857767 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="init" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.858238 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="init" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.858724 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.858849 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" containerName="glance-db-sync" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.860470 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.875587 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57589d46b9-454st" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.897227 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7767d4d74c-nw584"] Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.964660 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-config\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.964716 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-nb\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.964748 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-sb\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:32 crc 
Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.964834 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-swift-storage-0\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584"
Jan 23 13:51:32 crc kubenswrapper[4771]: I0123 13:51:32.964887 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgr6\" (UniqueName: \"kubernetes.io/projected/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-kube-api-access-pkgr6\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584"
Jan 23 13:51:33 crc kubenswrapper[4771]: E0123 13:51:33.067050 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-cinder-api:watcher_latest"
Jan 23 13:51:33 crc kubenswrapper[4771]: E0123 13:51:33.067120 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-cinder-api:watcher_latest"
Jan 23 13:51:33 crc kubenswrapper[4771]: E0123 13:51:33.067319 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.129.56.240:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs &&
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbqcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d7jd6_openstack(506b2de1-f73d-4781-a52d-3f622c78660d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.068516 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-swift-storage-0\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: E0123 13:51:33.068581 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d7jd6" podUID="506b2de1-f73d-4781-a52d-3f622c78660d" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.068621 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgr6\" (UniqueName: \"kubernetes.io/projected/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-kube-api-access-pkgr6\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 
13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.068909 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-config\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.068963 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-nb\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.069021 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-sb\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.069155 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-svc\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.069813 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-swift-storage-0\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.069903 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-config\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.070107 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-svc\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.070126 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-sb\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.070851 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-nb\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.108990 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pkgr6\" (UniqueName: \"kubernetes.io/projected/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-kube-api-access-pkgr6\") pod \"dnsmasq-dns-7767d4d74c-nw584\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.195277 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.259393 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486a5657-e74a-4037-9ec1-52b56b74bb1e" path="/var/lib/kubelet/pods/486a5657-e74a-4037-9ec1-52b56b74bb1e/volumes" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.260317 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa616bb-c065-49e0-8dfc-d35709088801" path="/var/lib/kubelet/pods/7aa616bb-c065-49e0-8dfc-d35709088801/volumes" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.261059 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b6f8f8d-bcb6-481f-94d6-c82918ed42f4" path="/var/lib/kubelet/pods/8b6f8f8d-bcb6-481f-94d6-c82918ed42f4/volumes" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.289551 4771 scope.go:117] "RemoveContainer" containerID="46cd01ddc1a2d8b0a702f6385d53048e7ba1fd7a2a4e12a507fd00576dd5c2e6" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.553903 4771 scope.go:117] "RemoveContainer" containerID="5cf5af7b6aa72aa9cbb95e293efb5512b0dacdd5956016eb56d678702f021aea" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.770161 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8cggt" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.801353 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-config\") pod \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.801478 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-combined-ca-bundle\") pod \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.801755 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmvtw\" (UniqueName: \"kubernetes.io/projected/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-kube-api-access-vmvtw\") pod \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\" (UID: \"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc\") " Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.833782 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-kube-api-access-vmvtw" (OuterVolumeSpecName: "kube-api-access-vmvtw") pod "e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" (UID: "e7fefc0e-a90c-4550-8f94-6e392f6bc6fc"). InnerVolumeSpecName "kube-api-access-vmvtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.858623 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-config" (OuterVolumeSpecName: "config") pod "e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" (UID: "e7fefc0e-a90c-4550-8f94-6e392f6bc6fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.905114 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmvtw\" (UniqueName: \"kubernetes.io/projected/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-kube-api-access-vmvtw\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.905539 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.919557 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" (UID: "e7fefc0e-a90c-4550-8f94-6e392f6bc6fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.933494 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-99f77f8d8-2j9s2"] Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.990528 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:33 crc kubenswrapper[4771]: E0123 13:51:33.991450 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" containerName="neutron-db-sync" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.991563 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" containerName="neutron-db-sync" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.992361 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" containerName="neutron-db-sync" Jan 23 13:51:33 crc kubenswrapper[4771]: I0123 13:51:33.993808 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.008048 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.016858 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j5h54" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.017840 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.017982 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.027225 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.153517 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.155726 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.158040 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"57cfa0bafaf927f754bb5bd9dae0b9c910ada95388993f47d6c2b51a3916a54d"} Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.169919 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.179904 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8cggt" event={"ID":"e7fefc0e-a90c-4550-8f94-6e392f6bc6fc","Type":"ContainerDied","Data":"2e8c7d1adef5607cab52f5342a3db82f9d45cd78f483f1436c963748326455e2"} Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.179952 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e8c7d1adef5607cab52f5342a3db82f9d45cd78f483f1436c963748326455e2" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.180018 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8cggt" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.201091 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-99f77f8d8-2j9s2" event={"ID":"10c5f724-de62-4d78-be40-47f2a2e11eb6","Type":"ContainerStarted","Data":"a7a3f4a5fe6a5bc9b065d9078be9ea7ae9d5f39fd5ad1fd797283db145473632"} Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.209257 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.223954 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.224343 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.224470 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxpc\" (UniqueName: \"kubernetes.io/projected/25fccea2-7bcf-4d30-a672-5590611ab0b1-kube-api-access-nwxpc\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.224627 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-scripts\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.224763 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-logs\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.225188 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.225520 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-config-data\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.237550 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerStarted","Data":"b000fac6131a392f545af2ebb68e9ce0cf352e051176ce409883d5161c6a4615"} Jan 23 13:51:34 crc kubenswrapper[4771]: E0123 13:51:34.322233 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-d7jd6" podUID="506b2de1-f73d-4781-a52d-3f622c78660d" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340081 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340290 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340322 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340354 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340400 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-config-data\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340496 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340540 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmmt\" (UniqueName: 
\"kubernetes.io/projected/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-kube-api-access-qcmmt\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340587 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340713 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340777 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxpc\" (UniqueName: \"kubernetes.io/projected/25fccea2-7bcf-4d30-a672-5590611ab0b1-kube-api-access-nwxpc\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340822 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-scripts\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.340864 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-logs\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.345900 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.348648 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.357926 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-logs\") pod 
\"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.365560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-config-data\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.366957 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-scripts\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.374828 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.414183 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxpc\" (UniqueName: \"kubernetes.io/projected/25fccea2-7bcf-4d30-a672-5590611ab0b1-kube-api-access-nwxpc\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.458019 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmmt\" (UniqueName: \"kubernetes.io/projected/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-kube-api-access-qcmmt\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.458477 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.459048 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.472700 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.475488 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.475771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.475937 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.476190 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.477704 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.478148 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.479688 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.481201 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.514478 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmmt\" (UniqueName: \"kubernetes.io/projected/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-kube-api-access-qcmmt\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.515160 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.562255 4771 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7767d4d74c-nw584"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.563499 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.579476 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57cbdcc8d-5lcfn"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.615024 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7f8875f-c2rr7"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.617702 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.640517 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7f8875f-c2rr7"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.653856 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.672880 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.685285 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.787583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.787681 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.787715 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bgr\" (UniqueName: \"kubernetes.io/projected/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-kube-api-access-k6bgr\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.787740 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 
13:51:34.787843 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-config\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.787871 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-svc\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.897257 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.897485 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.897580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6bgr\" (UniqueName: \"kubernetes.io/projected/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-kube-api-access-k6bgr\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.897641 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.897892 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-config\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.897949 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-svc\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.899438 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-svc\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.900097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.901879 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-config\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.902520 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.907376 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.907482 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fcb4fcfd8-xrpf8"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.909383 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.924326 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.924842 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.925130 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.922589 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-n2nlk" Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.956152 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fcb4fcfd8-xrpf8"] Jan 23 13:51:34 crc kubenswrapper[4771]: I0123 13:51:34.975489 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6bgr\" (UniqueName: \"kubernetes.io/projected/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-kube-api-access-k6bgr\") pod \"dnsmasq-dns-5c7f8875f-c2rr7\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.005907 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-config\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.006316 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-ovndb-tls-certs\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.008146 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-httpd-config\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.008396 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-combined-ca-bundle\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.008510 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87nsc\" (UniqueName: \"kubernetes.io/projected/62e350a1-5498-4d62-9d4a-3382d3ed1369-kube-api-access-87nsc\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.009174 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.072551 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7767d4d74c-nw584"] Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.104034 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.111129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-config\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.111243 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-ovndb-tls-certs\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.111323 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-httpd-config\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.111449 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-combined-ca-bundle\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.118275 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-87nsc\" (UniqueName: \"kubernetes.io/projected/62e350a1-5498-4d62-9d4a-3382d3ed1369-kube-api-access-87nsc\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.143192 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-config\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.170205 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-ovndb-tls-certs\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.170738 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-httpd-config\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.173203 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-combined-ca-bundle\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: W0123 13:51:35.177272 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebafbd30_6f52_4209_b962_c97da4d4f9da.slice/crio-0fff1bc76cda7afdb87b26a2a56617707fa1eee249a18f8a6561ae5dcac73214 WatchSource:0}: Error finding container 0fff1bc76cda7afdb87b26a2a56617707fa1eee249a18f8a6561ae5dcac73214: Status 404 returned error can't find the container with id 0fff1bc76cda7afdb87b26a2a56617707fa1eee249a18f8a6561ae5dcac73214 Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.215306 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87nsc\" (UniqueName: \"kubernetes.io/projected/62e350a1-5498-4d62-9d4a-3382d3ed1369-kube-api-access-87nsc\") pod \"neutron-5fcb4fcfd8-xrpf8\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") " pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.309183 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.319689 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.363277 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.390611 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cz4ft" event={"ID":"8fcfc471-7906-46f5-9238-4d66823ca1bf","Type":"ContainerStarted","Data":"92701b8d11acf8cca66d4b3e154f17a6044b28ecdabb713991336418b4fe8a9d"} Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.422098 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57cbdcc8d-5lcfn" event={"ID":"dd12560a-7353-492b-8037-822d7aceb4e0","Type":"ContainerStarted","Data":"0db1483d52680c58b7ba5f1f05c7671e532bcaebc08b5643e6de5540b9ff49fb"} Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.443820 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-cz4ft" podStartSLOduration=6.979742224 podStartE2EDuration="40.443790455s" podCreationTimestamp="2026-01-23 13:50:55 +0000 UTC" firstStartedPulling="2026-01-23 13:50:57.772058011 +0000 UTC m=+1098.794595636" lastFinishedPulling="2026-01-23 13:51:31.236106232 +0000 UTC m=+1132.258643867" observedRunningTime="2026-01-23 13:51:35.415956753 +0000 UTC m=+1136.438494398" watchObservedRunningTime="2026-01-23 13:51:35.443790455 +0000 UTC m=+1136.466328080" Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.491043 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" event={"ID":"33c4bc1b-bf2e-4053-8c0c-54a34a901efb","Type":"ContainerStarted","Data":"dee0644b896399d3c68b615b82d054c884b387f827ed2d7a1bb74ba57ca8fe0b"} Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.533305 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hbkvm"] Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.537328 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-99f77f8d8-2j9s2" event={"ID":"10c5f724-de62-4d78-be40-47f2a2e11eb6","Type":"ContainerStarted","Data":"2afd31066240e94aa240c0d85614a362530e127adcbc5ed5dbe9b1eaade7ebfd"} Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.562172 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerStarted","Data":"0fff1bc76cda7afdb87b26a2a56617707fa1eee249a18f8a6561ae5dcac73214"} Jan 23 13:51:35 crc kubenswrapper[4771]: W0123 13:51:35.646685 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8fe2dfb_8c93_4c82_bbc8_b24a3b6c6515.slice/crio-d919abd63a67fcb20ab644ebe0b34542f43aab83bf810c89104063e9c6307e81 WatchSource:0}: Error finding container d919abd63a67fcb20ab644ebe0b34542f43aab83bf810c89104063e9c6307e81: Status 404 returned error can't find the container with id d919abd63a67fcb20ab644ebe0b34542f43aab83bf810c89104063e9c6307e81 Jan 23 13:51:35 crc kubenswrapper[4771]: I0123 13:51:35.905784 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.008868 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-5c7f8875f-c2rr7"] Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.156658 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:36 crc kubenswrapper[4771]: W0123 13:51:36.230715 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fc4c99f_1a0f_4905_868c_a4c0a67cf034.slice/crio-6cd96803c13977ebc4a770bbf4fdf337731f217e1440f7cfa9095f4d300a28a3 WatchSource:0}: Error finding container 6cd96803c13977ebc4a770bbf4fdf337731f217e1440f7cfa9095f4d300a28a3: Status 404 returned error can't find the container with id 6cd96803c13977ebc4a770bbf4fdf337731f217e1440f7cfa9095f4d300a28a3 Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.641431 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe","Type":"ContainerStarted","Data":"5fb33c8981a46ccd45ccd00cb9b7ad5109b5313c22819c787e670c92b9595899"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.642273 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe","Type":"ContainerStarted","Data":"4677c51e504d11f4dac31eca67e8893575ff123973cd6efe0aa1e06e23177951"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.649674 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57cbdcc8d-5lcfn" event={"ID":"dd12560a-7353-492b-8037-822d7aceb4e0","Type":"ContainerStarted","Data":"aa0ade2765e3521d223196cad3905f370e27bf8aab3cc5f93bf950006057104e"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.693578 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-99f77f8d8-2j9s2" event={"ID":"10c5f724-de62-4d78-be40-47f2a2e11eb6","Type":"ContainerStarted","Data":"c317b37309f1dd8f35ba92e9d8dfde672279d895829ca97dce9a0fbfdaa0aa69"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.706704 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbkvm" event={"ID":"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515","Type":"ContainerStarted","Data":"d919abd63a67fcb20ab644ebe0b34542f43aab83bf810c89104063e9c6307e81"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.717824 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fcb4fcfd8-xrpf8"] Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.744620 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" event={"ID":"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18","Type":"ContainerStarted","Data":"26a81e5ae7895a0a0c6748b97f7f388df8f3ebfa113b1be9765757d3de517beb"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.759976 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5","Type":"ContainerStarted","Data":"498928a5ae3764b49a807a80ce4cfb4a9666331bf0c6ff8a309cdbed784de6be"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.771464 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-99f77f8d8-2j9s2" podStartSLOduration=32.60689528 podStartE2EDuration="32.771439333s" podCreationTimestamp="2026-01-23 13:51:04 +0000 UTC" firstStartedPulling="2026-01-23 13:51:34.016765139 +0000 UTC m=+1135.039302764" lastFinishedPulling="2026-01-23 13:51:34.181309192 +0000 UTC m=+1135.203846817" observedRunningTime="2026-01-23 
13:51:36.752732821 +0000 UTC m=+1137.775270446" watchObservedRunningTime="2026-01-23 13:51:36.771439333 +0000 UTC m=+1137.793976958" Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.779324 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25fccea2-7bcf-4d30-a672-5590611ab0b1","Type":"ContainerStarted","Data":"0551c4fca40fad3fa124b5a8e869fb6a3ecf3e4360e5c9901792635f18d37813"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.804077 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" event={"ID":"33c4bc1b-bf2e-4053-8c0c-54a34a901efb","Type":"ContainerStarted","Data":"cabd9998636e96e452e80ad698f9613b41fbb3d501f4173099d358be6835a6f8"} Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.804285 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" podUID="33c4bc1b-bf2e-4053-8c0c-54a34a901efb" containerName="init" containerID="cri-o://cabd9998636e96e452e80ad698f9613b41fbb3d501f4173099d358be6835a6f8" gracePeriod=10 Jan 23 13:51:36 crc kubenswrapper[4771]: I0123 13:51:36.828616 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4c99f-1a0f-4905-868c-a4c0a67cf034","Type":"ContainerStarted","Data":"6cd96803c13977ebc4a770bbf4fdf337731f217e1440f7cfa9095f4d300a28a3"} Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.342983 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.480322 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.876639 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcb4fcfd8-xrpf8" event={"ID":"62e350a1-5498-4d62-9d4a-3382d3ed1369","Type":"ContainerStarted","Data":"b836a0d0efd333e75b5c1fe07f5044a162ece8005a9eae7e94a569cee5821a88"} Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.877071 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcb4fcfd8-xrpf8" event={"ID":"62e350a1-5498-4d62-9d4a-3382d3ed1369","Type":"ContainerStarted","Data":"82a483c81bf1c7e9df33d3b3fe6aa29d03bd3b04d0ed2246e56369acee56a99d"} Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.897615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbkvm" event={"ID":"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515","Type":"ContainerStarted","Data":"505a60d53313a2366a13eef0c7e455aa06a9618df0087a01c25029baeac3749f"} Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.902875 4771 generic.go:334] "Generic (PLEG): container finished" podID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" containerID="dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8" exitCode=0 Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.902973 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" event={"ID":"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18","Type":"ContainerDied","Data":"dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8"} Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.911347 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe","Type":"ContainerStarted","Data":"b0c4d637ac8b9f1b7f2d46bac75336fe7d95abe8ebe24519f440d64402df3d74"} Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.913532 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.915754 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": dial tcp 10.217.0.171:9322: connect: connection refused" Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.927246 4771 generic.go:334] "Generic (PLEG): container finished" podID="33c4bc1b-bf2e-4053-8c0c-54a34a901efb" containerID="cabd9998636e96e452e80ad698f9613b41fbb3d501f4173099d358be6835a6f8" exitCode=0 Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.927319 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" event={"ID":"33c4bc1b-bf2e-4053-8c0c-54a34a901efb","Type":"ContainerDied","Data":"cabd9998636e96e452e80ad698f9613b41fbb3d501f4173099d358be6835a6f8"} Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.938280 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hbkvm" podStartSLOduration=20.938255166 podStartE2EDuration="20.938255166s" podCreationTimestamp="2026-01-23 13:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:37.921508935 +0000 UTC m=+1138.944046560" watchObservedRunningTime="2026-01-23 13:51:37.938255166 +0000 UTC m=+1138.960792781" Jan 23 13:51:37 crc kubenswrapper[4771]: I0123 13:51:37.939721 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57cbdcc8d-5lcfn" event={"ID":"dd12560a-7353-492b-8037-822d7aceb4e0","Type":"ContainerStarted","Data":"d44456d418066df8ef78779ef15446f4d90b7c9f5bc0b4e38348fc697acd8824"} Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.019885 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=20.019860031 podStartE2EDuration="20.019860031s" podCreationTimestamp="2026-01-23 13:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:37.967333177 +0000 UTC m=+1138.989870822" watchObservedRunningTime="2026-01-23 13:51:38.019860031 +0000 UTC m=+1139.042397656" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.049164 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-57cbdcc8d-5lcfn" podStartSLOduration=34.049134629 podStartE2EDuration="34.049134629s" podCreationTimestamp="2026-01-23 13:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:38.018620282 +0000 UTC m=+1139.041157907" watchObservedRunningTime="2026-01-23 13:51:38.049134629 +0000 UTC m=+1139.071672254" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.596779 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.597842 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/watcher-api-0" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.800053 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-585dc95dd9-rtq4d"] Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.803813 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.807912 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.810051 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.824824 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-585dc95dd9-rtq4d"] Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.895876 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-combined-ca-bundle\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.895946 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-config\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.896020 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-internal-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.896093 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp52f\" (UniqueName: \"kubernetes.io/projected/bd25f0ad-f4d3-4333-8803-cc30734719f9-kube-api-access-cp52f\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.896200 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-public-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.896402 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-httpd-config\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.896593 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-ovndb-tls-certs\") pod 
\"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.998699 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp52f\" (UniqueName: \"kubernetes.io/projected/bd25f0ad-f4d3-4333-8803-cc30734719f9-kube-api-access-cp52f\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.998786 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-public-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.998881 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-httpd-config\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.998934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-ovndb-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.999018 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-combined-ca-bundle\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.999089 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-config\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:38 crc kubenswrapper[4771]: I0123 13:51:38.999172 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-internal-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.012522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-combined-ca-bundle\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.016103 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-httpd-config\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc 
kubenswrapper[4771]: I0123 13:51:39.016292 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-internal-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.022663 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-public-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.022976 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4c99f-1a0f-4905-868c-a4c0a67cf034","Type":"ContainerStarted","Data":"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93"} Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.029230 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-config\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.029572 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25fccea2-7bcf-4d30-a672-5590611ab0b1","Type":"ContainerStarted","Data":"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832"} Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.044326 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-ovndb-tls-certs\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.049218 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp52f\" (UniqueName: \"kubernetes.io/projected/bd25f0ad-f4d3-4333-8803-cc30734719f9-kube-api-access-cp52f\") pod \"neutron-585dc95dd9-rtq4d\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.161467 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:39 crc kubenswrapper[4771]: I0123 13:51:39.637657 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.050566 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.050638 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" event={"ID":"33c4bc1b-bf2e-4053-8c0c-54a34a901efb","Type":"ContainerDied","Data":"dee0644b896399d3c68b615b82d054c884b387f827ed2d7a1bb74ba57ca8fe0b"} Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.052356 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee0644b896399d3c68b615b82d054c884b387f827ed2d7a1bb74ba57ca8fe0b" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.133146 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.242598 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgr6\" (UniqueName: \"kubernetes.io/projected/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-kube-api-access-pkgr6\") pod \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.242784 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-svc\") pod \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.242913 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-swift-storage-0\") pod \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.242948 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-sb\") pod \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.242972 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-config\") pod \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.243101 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-nb\") pod \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.279144 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-kube-api-access-pkgr6" (OuterVolumeSpecName: "kube-api-access-pkgr6") pod "33c4bc1b-bf2e-4053-8c0c-54a34a901efb" (UID: "33c4bc1b-bf2e-4053-8c0c-54a34a901efb"). InnerVolumeSpecName "kube-api-access-pkgr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.325768 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33c4bc1b-bf2e-4053-8c0c-54a34a901efb" (UID: "33c4bc1b-bf2e-4053-8c0c-54a34a901efb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.325980 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33c4bc1b-bf2e-4053-8c0c-54a34a901efb" (UID: "33c4bc1b-bf2e-4053-8c0c-54a34a901efb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.325998 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-config" (OuterVolumeSpecName: "config") pod "33c4bc1b-bf2e-4053-8c0c-54a34a901efb" (UID: "33c4bc1b-bf2e-4053-8c0c-54a34a901efb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.326370 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "33c4bc1b-bf2e-4053-8c0c-54a34a901efb" (UID: "33c4bc1b-bf2e-4053-8c0c-54a34a901efb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.345698 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33c4bc1b-bf2e-4053-8c0c-54a34a901efb" (UID: "33c4bc1b-bf2e-4053-8c0c-54a34a901efb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.346793 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-svc\") pod \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\" (UID: \"33c4bc1b-bf2e-4053-8c0c-54a34a901efb\") " Jan 23 13:51:40 crc kubenswrapper[4771]: W0123 13:51:40.347784 4771 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/33c4bc1b-bf2e-4053-8c0c-54a34a901efb/volumes/kubernetes.io~configmap/dns-svc Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.347814 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33c4bc1b-bf2e-4053-8c0c-54a34a901efb" (UID: "33c4bc1b-bf2e-4053-8c0c-54a34a901efb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.348427 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.348460 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgr6\" (UniqueName: \"kubernetes.io/projected/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-kube-api-access-pkgr6\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.348481 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.348494 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.348506 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:40 crc kubenswrapper[4771]: I0123 13:51:40.348517 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c4bc1b-bf2e-4053-8c0c-54a34a901efb-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:41 crc kubenswrapper[4771]: I0123 13:51:41.060487 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7767d4d74c-nw584" Jan 23 13:51:41 crc kubenswrapper[4771]: I0123 13:51:41.135478 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7767d4d74c-nw584"] Jan 23 13:51:41 crc kubenswrapper[4771]: I0123 13:51:41.143833 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7767d4d74c-nw584"] Jan 23 13:51:41 crc kubenswrapper[4771]: I0123 13:51:41.243591 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c4bc1b-bf2e-4053-8c0c-54a34a901efb" path="/var/lib/kubelet/pods/33c4bc1b-bf2e-4053-8c0c-54a34a901efb/volumes" Jan 23 13:51:42 crc kubenswrapper[4771]: I0123 13:51:42.109204 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 13:51:42 crc kubenswrapper[4771]: I0123 13:51:42.949255 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-585dc95dd9-rtq4d"] Jan 23 13:51:42 crc kubenswrapper[4771]: W0123 13:51:42.995766 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd25f0ad_f4d3_4333_8803_cc30734719f9.slice/crio-792bf2c3f27288a7be3ed2506cd6b611c6aa43c5f6f34e75c5f054b8541585e0 WatchSource:0}: Error finding container 792bf2c3f27288a7be3ed2506cd6b611c6aa43c5f6f34e75c5f054b8541585e0: Status 404 returned error can't find the container with id 792bf2c3f27288a7be3ed2506cd6b611c6aa43c5f6f34e75c5f054b8541585e0 Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.085257 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerStarted","Data":"8762cfdc1fa334b043c97625e1fc97b183fd3bfeb09d437515b21cff0c5aa955"} Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.086822 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"a9b26a6d-89f0-40b9-9887-5d8aedf33ad5","Type":"ContainerStarted","Data":"009362c1aef170946dd1fa705d9b43250eabe93580bb252cdac23ede8cb2d400"} Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.088459 4771 generic.go:334] "Generic (PLEG): container finished" podID="8fcfc471-7906-46f5-9238-4d66823ca1bf" containerID="92701b8d11acf8cca66d4b3e154f17a6044b28ecdabb713991336418b4fe8a9d" exitCode=0 Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.088533 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cz4ft" event={"ID":"8fcfc471-7906-46f5-9238-4d66823ca1bf","Type":"ContainerDied","Data":"92701b8d11acf8cca66d4b3e154f17a6044b28ecdabb713991336418b4fe8a9d"} Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.091228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585dc95dd9-rtq4d" event={"ID":"bd25f0ad-f4d3-4333-8803-cc30734719f9","Type":"ContainerStarted","Data":"792bf2c3f27288a7be3ed2506cd6b611c6aa43c5f6f34e75c5f054b8541585e0"} Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.109396 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcb4fcfd8-xrpf8" event={"ID":"62e350a1-5498-4d62-9d4a-3382d3ed1369","Type":"ContainerStarted","Data":"093e37b9fead0067733413c974f0062960f36dd697f478b80976511b3346bc5c"} Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.110476 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.116334 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=18.289809657 podStartE2EDuration="25.116314469s" podCreationTimestamp="2026-01-23 13:51:18 +0000 UTC" firstStartedPulling="2026-01-23 13:51:35.5351597 +0000 UTC m=+1136.557697325" lastFinishedPulling="2026-01-23 13:51:42.361664512 +0000 UTC m=+1143.384202137" observedRunningTime="2026-01-23 13:51:43.109508693 +0000 UTC m=+1144.132046338" watchObservedRunningTime="2026-01-23 13:51:43.116314469 +0000 UTC m=+1144.138852094" Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.125352 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerStarted","Data":"936ab2aa56733430a3b6235b328e932f1daa5e0c41b231ba40ea1373444cc2b5"} Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.139118 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" event={"ID":"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18","Type":"ContainerStarted","Data":"ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80"} Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.140074 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.195065 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=18.044855727 podStartE2EDuration="25.195029413s" podCreationTimestamp="2026-01-23 13:51:18 +0000 UTC" 
firstStartedPulling="2026-01-23 13:51:35.213109848 +0000 UTC m=+1136.235647473" lastFinishedPulling="2026-01-23 13:51:42.363283534 +0000 UTC m=+1143.385821159" observedRunningTime="2026-01-23 13:51:43.155196731 +0000 UTC m=+1144.177734356" watchObservedRunningTime="2026-01-23 13:51:43.195029413 +0000 UTC m=+1144.217567048" Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.238814 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fcb4fcfd8-xrpf8" podStartSLOduration=9.238793519 podStartE2EDuration="9.238793519s" podCreationTimestamp="2026-01-23 13:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:43.175559045 +0000 UTC m=+1144.198096690" watchObservedRunningTime="2026-01-23 13:51:43.238793519 +0000 UTC m=+1144.261331134" Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.301643 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" podStartSLOduration=9.301605758000001 podStartE2EDuration="9.301605758s" podCreationTimestamp="2026-01-23 13:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:43.201797607 +0000 UTC m=+1144.224335232" watchObservedRunningTime="2026-01-23 13:51:43.301605758 +0000 UTC m=+1144.324143393" Jan 23 13:51:43 crc kubenswrapper[4771]: I0123 13:51:43.611397 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.157641 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4c99f-1a0f-4905-868c-a4c0a67cf034","Type":"ContainerStarted","Data":"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901"} Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.157915 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-log" containerID="cri-o://aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93" gracePeriod=30 Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.158013 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-httpd" containerID="cri-o://7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901" gracePeriod=30 Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.172710 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585dc95dd9-rtq4d" event={"ID":"bd25f0ad-f4d3-4333-8803-cc30734719f9","Type":"ContainerStarted","Data":"7868613dd37b3a8d5277e8a37e8b3d59908a9becb555d7bc67322f4f77e0f548"} Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.173260 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585dc95dd9-rtq4d" event={"ID":"bd25f0ad-f4d3-4333-8803-cc30734719f9","Type":"ContainerStarted","Data":"10e59b1f8efabf7952b88b4340e3e30fd7d851847aa43c0d75748b5d89eb2677"} Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.174648 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.187208 4771 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-log" containerID="cri-o://735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832" gracePeriod=30 Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.187656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25fccea2-7bcf-4d30-a672-5590611ab0b1","Type":"ContainerStarted","Data":"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15"} Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.187747 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-httpd" containerID="cri-o://42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15" gracePeriod=30 Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.193457 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.193435821 podStartE2EDuration="11.193435821s" podCreationTimestamp="2026-01-23 13:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:44.188235446 +0000 UTC m=+1145.210773081" watchObservedRunningTime="2026-01-23 13:51:44.193435821 +0000 UTC m=+1145.215973456" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.214290 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-585dc95dd9-rtq4d" podStartSLOduration=6.21426836 podStartE2EDuration="6.21426836s" podCreationTimestamp="2026-01-23 13:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:44.213148455 +0000 UTC m=+1145.235686080" watchObservedRunningTime="2026-01-23 13:51:44.21426836 +0000 UTC m=+1145.236805985" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.256580 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.25655881 podStartE2EDuration="12.25655881s" podCreationTimestamp="2026-01-23 13:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:44.253659898 +0000 UTC m=+1145.276197513" watchObservedRunningTime="2026-01-23 13:51:44.25655881 +0000 UTC m=+1145.279096435" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.508942 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.509026 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.625393 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.626017 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.774615 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-cz4ft" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.884300 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-combined-ca-bundle\") pod \"8fcfc471-7906-46f5-9238-4d66823ca1bf\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.884472 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-config-data\") pod \"8fcfc471-7906-46f5-9238-4d66823ca1bf\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.884548 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kblgt\" (UniqueName: \"kubernetes.io/projected/8fcfc471-7906-46f5-9238-4d66823ca1bf-kube-api-access-kblgt\") pod \"8fcfc471-7906-46f5-9238-4d66823ca1bf\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.884623 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-scripts\") pod \"8fcfc471-7906-46f5-9238-4d66823ca1bf\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.884795 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fcfc471-7906-46f5-9238-4d66823ca1bf-logs\") pod \"8fcfc471-7906-46f5-9238-4d66823ca1bf\" (UID: \"8fcfc471-7906-46f5-9238-4d66823ca1bf\") " Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.886105 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fcfc471-7906-46f5-9238-4d66823ca1bf-logs" (OuterVolumeSpecName: "logs") pod "8fcfc471-7906-46f5-9238-4d66823ca1bf" (UID: "8fcfc471-7906-46f5-9238-4d66823ca1bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.896572 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fcfc471-7906-46f5-9238-4d66823ca1bf-kube-api-access-kblgt" (OuterVolumeSpecName: "kube-api-access-kblgt") pod "8fcfc471-7906-46f5-9238-4d66823ca1bf" (UID: "8fcfc471-7906-46f5-9238-4d66823ca1bf"). InnerVolumeSpecName "kube-api-access-kblgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.901927 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-scripts" (OuterVolumeSpecName: "scripts") pod "8fcfc471-7906-46f5-9238-4d66823ca1bf" (UID: "8fcfc471-7906-46f5-9238-4d66823ca1bf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.931625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-config-data" (OuterVolumeSpecName: "config-data") pod "8fcfc471-7906-46f5-9238-4d66823ca1bf" (UID: "8fcfc471-7906-46f5-9238-4d66823ca1bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.960849 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fcfc471-7906-46f5-9238-4d66823ca1bf" (UID: "8fcfc471-7906-46f5-9238-4d66823ca1bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.987635 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.987710 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.987722 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kblgt\" (UniqueName: \"kubernetes.io/projected/8fcfc471-7906-46f5-9238-4d66823ca1bf-kube-api-access-kblgt\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.987783 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fcfc471-7906-46f5-9238-4d66823ca1bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:44 crc kubenswrapper[4771]: I0123 13:51:44.987793 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fcfc471-7906-46f5-9238-4d66823ca1bf-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.075191 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.187311 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.191298 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwxpc\" (UniqueName: \"kubernetes.io/projected/25fccea2-7bcf-4d30-a672-5590611ab0b1-kube-api-access-nwxpc\") pod \"25fccea2-7bcf-4d30-a672-5590611ab0b1\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.191367 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-httpd-run\") pod \"25fccea2-7bcf-4d30-a672-5590611ab0b1\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.191394 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-scripts\") pod \"25fccea2-7bcf-4d30-a672-5590611ab0b1\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.191490 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-logs\") pod \"25fccea2-7bcf-4d30-a672-5590611ab0b1\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.191530 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-combined-ca-bundle\") pod \"25fccea2-7bcf-4d30-a672-5590611ab0b1\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.191632 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-config-data\") pod \"25fccea2-7bcf-4d30-a672-5590611ab0b1\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.191693 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"25fccea2-7bcf-4d30-a672-5590611ab0b1\" (UID: \"25fccea2-7bcf-4d30-a672-5590611ab0b1\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.192793 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "25fccea2-7bcf-4d30-a672-5590611ab0b1" (UID: "25fccea2-7bcf-4d30-a672-5590611ab0b1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.193008 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-logs" (OuterVolumeSpecName: "logs") pod "25fccea2-7bcf-4d30-a672-5590611ab0b1" (UID: "25fccea2-7bcf-4d30-a672-5590611ab0b1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.211392 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "25fccea2-7bcf-4d30-a672-5590611ab0b1" (UID: "25fccea2-7bcf-4d30-a672-5590611ab0b1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.211569 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fccea2-7bcf-4d30-a672-5590611ab0b1-kube-api-access-nwxpc" (OuterVolumeSpecName: "kube-api-access-nwxpc") pod "25fccea2-7bcf-4d30-a672-5590611ab0b1" (UID: "25fccea2-7bcf-4d30-a672-5590611ab0b1"). InnerVolumeSpecName "kube-api-access-nwxpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.211659 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-scripts" (OuterVolumeSpecName: "scripts") pod "25fccea2-7bcf-4d30-a672-5590611ab0b1" (UID: "25fccea2-7bcf-4d30-a672-5590611ab0b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.242675 4771 generic.go:334] "Generic (PLEG): container finished" podID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerID="7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901" exitCode=0 Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.242717 4771 generic.go:334] "Generic (PLEG): container finished" podID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerID="aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93" exitCode=143 Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.242857 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.253165 4771 generic.go:334] "Generic (PLEG): container finished" podID="d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" containerID="505a60d53313a2366a13eef0c7e455aa06a9618df0087a01c25029baeac3749f" exitCode=0 Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.257775 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cz4ft" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.266141 4771 generic.go:334] "Generic (PLEG): container finished" podID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerID="42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15" exitCode=143 Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.266361 4771 generic.go:334] "Generic (PLEG): container finished" podID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerID="735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832" exitCode=143 Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.268500 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.295434 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.295640 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcmmt\" (UniqueName: \"kubernetes.io/projected/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-kube-api-access-qcmmt\") pod \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.295698 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-combined-ca-bundle\") pod \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.295805 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-httpd-run\") pod \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.296046 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-logs\") pod \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.296201 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-config-data\") pod \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.296233 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-scripts\") pod \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\" (UID: \"4fc4c99f-1a0f-4905-868c-a4c0a67cf034\") " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.298993 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwxpc\" (UniqueName: \"kubernetes.io/projected/25fccea2-7bcf-4d30-a672-5590611ab0b1-kube-api-access-nwxpc\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.299046 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.299064 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.299079 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25fccea2-7bcf-4d30-a672-5590611ab0b1-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: 
I0123 13:51:45.299113 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.313116 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-logs" (OuterVolumeSpecName: "logs") pod "4fc4c99f-1a0f-4905-868c-a4c0a67cf034" (UID: "4fc4c99f-1a0f-4905-868c-a4c0a67cf034"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.319843 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4fc4c99f-1a0f-4905-868c-a4c0a67cf034" (UID: "4fc4c99f-1a0f-4905-868c-a4c0a67cf034"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.368371 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "4fc4c99f-1a0f-4905-868c-a4c0a67cf034" (UID: "4fc4c99f-1a0f-4905-868c-a4c0a67cf034"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.371069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-scripts" (OuterVolumeSpecName: "scripts") pod "4fc4c99f-1a0f-4905-868c-a4c0a67cf034" (UID: "4fc4c99f-1a0f-4905-868c-a4c0a67cf034"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.375031 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-kube-api-access-qcmmt" (OuterVolumeSpecName: "kube-api-access-qcmmt") pod "4fc4c99f-1a0f-4905-868c-a4c0a67cf034" (UID: "4fc4c99f-1a0f-4905-868c-a4c0a67cf034"). InnerVolumeSpecName "kube-api-access-qcmmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.398548 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-config-data" (OuterVolumeSpecName: "config-data") pod "25fccea2-7bcf-4d30-a672-5590611ab0b1" (UID: "25fccea2-7bcf-4d30-a672-5590611ab0b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.400973 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.401008 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.401035 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.401047 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcmmt\" (UniqueName: \"kubernetes.io/projected/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-kube-api-access-qcmmt\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.401059 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.401069 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.414117 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.419978 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25fccea2-7bcf-4d30-a672-5590611ab0b1" (UID: "25fccea2-7bcf-4d30-a672-5590611ab0b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.469494 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.469617 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fc4c99f-1a0f-4905-868c-a4c0a67cf034" (UID: "4fc4c99f-1a0f-4905-868c-a4c0a67cf034"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.499922 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-config-data" (OuterVolumeSpecName: "config-data") pod "4fc4c99f-1a0f-4905-868c-a4c0a67cf034" (UID: "4fc4c99f-1a0f-4905-868c-a4c0a67cf034"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.506257 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fccea2-7bcf-4d30-a672-5590611ab0b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.506308 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.506332 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.506344 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc4c99f-1a0f-4905-868c-a4c0a67cf034-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.506355 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.629895 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4c99f-1a0f-4905-868c-a4c0a67cf034","Type":"ContainerDied","Data":"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.631857 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4c99f-1a0f-4905-868c-a4c0a67cf034","Type":"ContainerDied","Data":"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.632610 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fc4c99f-1a0f-4905-868c-a4c0a67cf034","Type":"ContainerDied","Data":"6cd96803c13977ebc4a770bbf4fdf337731f217e1440f7cfa9095f4d300a28a3"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.632689 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbkvm" event={"ID":"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515","Type":"ContainerDied","Data":"505a60d53313a2366a13eef0c7e455aa06a9618df0087a01c25029baeac3749f"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.632765 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cz4ft" event={"ID":"8fcfc471-7906-46f5-9238-4d66823ca1bf","Type":"ContainerDied","Data":"0f0564fa4de829f2baf1f6af27c4ec153ec5b73666c7460b7d5293178ec9f992"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.632854 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f0564fa4de829f2baf1f6af27c4ec153ec5b73666c7460b7d5293178ec9f992" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.632935 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5997f6f59b-xjrp4"] Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.633055 4771 scope.go:117] "RemoveContainer" containerID="7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901" Jan 23 13:51:45 crc kubenswrapper[4771]: E0123 13:51:45.634044 4771 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-log" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.634141 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-log" Jan 23 13:51:45 crc kubenswrapper[4771]: E0123 13:51:45.634248 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-log" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.634307 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-log" Jan 23 13:51:45 crc kubenswrapper[4771]: E0123 13:51:45.634390 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-httpd" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.635067 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-httpd" Jan 23 13:51:45 crc kubenswrapper[4771]: E0123 13:51:45.635187 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c4bc1b-bf2e-4053-8c0c-54a34a901efb" containerName="init" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.635299 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c4bc1b-bf2e-4053-8c0c-54a34a901efb" containerName="init" Jan 23 13:51:45 crc kubenswrapper[4771]: E0123 13:51:45.635374 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-httpd" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.635492 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-httpd" Jan 23 13:51:45 crc kubenswrapper[4771]: E0123 13:51:45.635579 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fcfc471-7906-46f5-9238-4d66823ca1bf" containerName="placement-db-sync" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.635644 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fcfc471-7906-46f5-9238-4d66823ca1bf" containerName="placement-db-sync" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.636117 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-httpd" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.636204 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-log" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.636284 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" containerName="glance-httpd" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.636375 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c4bc1b-bf2e-4053-8c0c-54a34a901efb" containerName="init" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.636468 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fcfc471-7906-46f5-9238-4d66823ca1bf" containerName="placement-db-sync" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.636551 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" containerName="glance-log" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.638608 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"25fccea2-7bcf-4d30-a672-5590611ab0b1","Type":"ContainerDied","Data":"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.638711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25fccea2-7bcf-4d30-a672-5590611ab0b1","Type":"ContainerDied","Data":"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.638801 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25fccea2-7bcf-4d30-a672-5590611ab0b1","Type":"ContainerDied","Data":"0551c4fca40fad3fa124b5a8e869fb6a3ecf3e4360e5c9901792635f18d37813"} Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.638907 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5997f6f59b-xjrp4"] Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.639121 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.642978 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.644934 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-68sv2" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.648882 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.649277 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.649535 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.713511 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-config-data\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.713605 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/252f0b58-bb25-4d24-98a2-22cde8bb2daf-logs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.713667 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-public-tls-certs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.713803 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-scripts\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " 
pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.713832 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-combined-ca-bundle\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.713953 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-internal-tls-certs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.714036 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqf6v\" (UniqueName: \"kubernetes.io/projected/252f0b58-bb25-4d24-98a2-22cde8bb2daf-kube-api-access-rqf6v\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.815962 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-internal-tls-certs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.816282 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqf6v\" (UniqueName: \"kubernetes.io/projected/252f0b58-bb25-4d24-98a2-22cde8bb2daf-kube-api-access-rqf6v\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.816399 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-config-data\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.816496 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/252f0b58-bb25-4d24-98a2-22cde8bb2daf-logs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.816580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-public-tls-certs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.816693 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-combined-ca-bundle\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " 
pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.816768 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-scripts\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.821051 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/252f0b58-bb25-4d24-98a2-22cde8bb2daf-logs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.870372 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-scripts\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.885594 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-internal-tls-certs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.886114 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-combined-ca-bundle\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.886156 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-config-data\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.886268 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.886461 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/252f0b58-bb25-4d24-98a2-22cde8bb2daf-public-tls-certs\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.889924 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqf6v\" (UniqueName: \"kubernetes.io/projected/252f0b58-bb25-4d24-98a2-22cde8bb2daf-kube-api-access-rqf6v\") pod \"placement-5997f6f59b-xjrp4\" (UID: \"252f0b58-bb25-4d24-98a2-22cde8bb2daf\") " pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.903044 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.932676 4771 scope.go:117] "RemoveContainer" containerID="aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93" Jan 23 13:51:45 
crc kubenswrapper[4771]: I0123 13:51:45.944627 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.946671 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.953843 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-j5h54" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.954072 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.954344 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.954549 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.969924 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:45 crc kubenswrapper[4771]: I0123 13:51:45.970086 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.019767 4771 scope.go:117] "RemoveContainer" containerID="7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.022401 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.022502 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.022579 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.022599 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.022628 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc 
kubenswrapper[4771]: I0123 13:51:46.022658 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6677p\" (UniqueName: \"kubernetes.io/projected/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-kube-api-access-6677p\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.022713 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.022733 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: E0123 13:51:46.024599 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901\": container with ID starting with 7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901 not found: ID does not exist" containerID="7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.024661 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901"} err="failed to get container status \"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901\": rpc error: code = NotFound desc = could not find container \"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901\": container with ID starting with 7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901 not found: ID does not exist" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.024699 4771 scope.go:117] "RemoveContainer" containerID="aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93" Jan 23 13:51:46 crc kubenswrapper[4771]: E0123 13:51:46.025444 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93\": container with ID starting with aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93 not found: ID does not exist" containerID="aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.025490 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93"} err="failed to get container status \"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93\": rpc error: code = NotFound desc = could not find container \"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93\": container with ID starting with aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93 not found: ID does not exist" Jan 23 13:51:46 crc 
kubenswrapper[4771]: I0123 13:51:46.025524 4771 scope.go:117] "RemoveContainer" containerID="7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.026207 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901"} err="failed to get container status \"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901\": rpc error: code = NotFound desc = could not find container \"7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901\": container with ID starting with 7cf8db0122c3878a57de057ef0b74b87c8fc65c84e4be37e6eadc65e972d3901 not found: ID does not exist" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.026226 4771 scope.go:117] "RemoveContainer" containerID="aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.026767 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93"} err="failed to get container status \"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93\": rpc error: code = NotFound desc = could not find container \"aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93\": container with ID starting with aa421ec4cc1e068f699554e21114e4cadd75af00080e0d468712adf782854d93 not found: ID does not exist" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.026782 4771 scope.go:117] "RemoveContainer" containerID="42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.036466 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.061189 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.074679 4771 scope.go:117] "RemoveContainer" containerID="735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.077476 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.079467 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.085232 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.085624 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.090805 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.124964 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125039 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125084 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125130 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125153 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125187 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125223 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6677p\" (UniqueName: \"kubernetes.io/projected/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-kube-api-access-6677p\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125247 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125269 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125349 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-logs\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125369 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvmsn\" (UniqueName: \"kubernetes.io/projected/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-kube-api-access-pvmsn\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.125392 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.133887 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.133935 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.133976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.134689 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.136296 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.141595 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-logs\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.144870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.146402 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.146891 4771 scope.go:117] "RemoveContainer" containerID="42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.147062 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: E0123 13:51:46.147519 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15\": container with ID starting with 42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15 not found: ID does not exist" containerID="42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.147562 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15"} err="failed to get container status \"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15\": rpc error: code = NotFound desc = could not find container \"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15\": container with ID starting with 42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15 not found: ID does not exist" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.147588 4771 scope.go:117] "RemoveContainer" containerID="735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832" Jan 23 13:51:46 
crc kubenswrapper[4771]: I0123 13:51:46.148105 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: E0123 13:51:46.148888 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832\": container with ID starting with 735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832 not found: ID does not exist" containerID="735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.148914 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832"} err="failed to get container status \"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832\": rpc error: code = NotFound desc = could not find container \"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832\": container with ID starting with 735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832 not found: ID does not exist" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.148928 4771 scope.go:117] "RemoveContainer" containerID="42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.161291 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15"} err="failed to get container status \"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15\": rpc error: code = NotFound desc = could not find container \"42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15\": container with ID starting with 42f34609e73db5fa0288ca0b4bf8ef86f2e3d01f0425f4111876637d3890ff15 not found: ID does not exist" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.161359 4771 scope.go:117] "RemoveContainer" containerID="735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.162331 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6677p\" (UniqueName: \"kubernetes.io/projected/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-kube-api-access-6677p\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.166430 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832"} err="failed to get container status \"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832\": rpc error: code = NotFound desc = could not find container \"735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832\": container with ID starting with 735c14d76af478c18cb2796b2be0bae78609a248e90108038fcc8dd9817de832 not found: ID does not exist" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.189918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") " pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.235591 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.236000 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.236053 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.236070 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.236103 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-logs\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.236121 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvmsn\" (UniqueName: \"kubernetes.io/projected/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-kube-api-access-pvmsn\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.236200 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.236243 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.237082 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: 
\"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.237795 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-logs\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.238728 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.240428 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.243047 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.244918 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.251291 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.272445 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvmsn\" (UniqueName: \"kubernetes.io/projected/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-kube-api-access-pvmsn\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.287103 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.288961 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.421109 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:51:46 crc kubenswrapper[4771]: I0123 13:51:46.768532 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5997f6f59b-xjrp4"] Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.293261 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fccea2-7bcf-4d30-a672-5590611ab0b1" path="/var/lib/kubelet/pods/25fccea2-7bcf-4d30-a672-5590611ab0b1/volumes" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.303875 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc4c99f-1a0f-4905-868c-a4c0a67cf034" path="/var/lib/kubelet/pods/4fc4c99f-1a0f-4905-868c-a4c0a67cf034/volumes" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.376801 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hbkvm" event={"ID":"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515","Type":"ContainerDied","Data":"d919abd63a67fcb20ab644ebe0b34542f43aab83bf810c89104063e9c6307e81"} Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.376850 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d919abd63a67fcb20ab644ebe0b34542f43aab83bf810c89104063e9c6307e81" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.397595 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5997f6f59b-xjrp4" event={"ID":"252f0b58-bb25-4d24-98a2-22cde8bb2daf","Type":"ContainerStarted","Data":"262f9659dd53fdb90aad12128984f0b161a44bec8fa94707517ed0dc1f75b7a6"} Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.401390 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.526465 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv46l\" (UniqueName: \"kubernetes.io/projected/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-kube-api-access-dv46l\") pod \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.526578 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-config-data\") pod \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.526631 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-scripts\") pod \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.526747 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-combined-ca-bundle\") pod \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.526772 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-credential-keys\") pod \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " Jan 23 13:51:47 crc 
kubenswrapper[4771]: I0123 13:51:47.526831 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-fernet-keys\") pod \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\" (UID: \"d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515\") " Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.530932 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.534671 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" (UID: "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.540297 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-kube-api-access-dv46l" (OuterVolumeSpecName: "kube-api-access-dv46l") pod "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" (UID: "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515"). InnerVolumeSpecName "kube-api-access-dv46l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.547926 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" (UID: "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.553598 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-scripts" (OuterVolumeSpecName: "scripts") pod "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" (UID: "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.584827 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" (UID: "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.592565 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-config-data" (OuterVolumeSpecName: "config-data") pod "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" (UID: "d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.631121 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.631160 4771 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.631170 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.631179 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv46l\" (UniqueName: \"kubernetes.io/projected/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-kube-api-access-dv46l\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.631192 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.631201 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.769015 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.814887 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c75694975-s585q"] Jan 23 13:51:47 crc kubenswrapper[4771]: E0123 13:51:47.815910 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" containerName="keystone-bootstrap" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.815946 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" containerName="keystone-bootstrap" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.816205 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" containerName="keystone-bootstrap" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.817228 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.833109 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.833482 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.877052 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c75694975-s585q"] Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938234 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-credential-keys\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938293 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-fernet-keys\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938316 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-config-data\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938445 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-public-tls-certs\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938502 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgqtx\" (UniqueName: \"kubernetes.io/projected/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-kube-api-access-zgqtx\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938825 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-scripts\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938856 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-combined-ca-bundle\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:47 crc kubenswrapper[4771]: I0123 13:51:47.938887 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-internal-tls-certs\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.040799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-scripts\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.040844 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-combined-ca-bundle\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.040870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-internal-tls-certs\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.043687 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-credential-keys\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.043722 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-fernet-keys\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.043741 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-config-data\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.043796 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-public-tls-certs\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.043836 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgqtx\" (UniqueName: \"kubernetes.io/projected/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-kube-api-access-zgqtx\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.053352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-credential-keys\") pod \"keystone-c75694975-s585q\" (UID: 
\"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.059186 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-fernet-keys\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.061095 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-config-data\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.063770 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-internal-tls-certs\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.065648 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-scripts\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.067246 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-public-tls-certs\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.102403 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-combined-ca-bundle\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.105815 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgqtx\" (UniqueName: \"kubernetes.io/projected/6860d79b-06bf-4ca1-b0a1-2d05a7b594c0-kube-api-access-zgqtx\") pod \"keystone-c75694975-s585q\" (UID: \"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0\") " pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.169683 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.484803 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-42qfl" event={"ID":"13f63357-c0a0-49eb-9011-bd32c84f414a","Type":"ContainerStarted","Data":"5b362fa2ace7a4e0d64395ac447be66d9a4f4db474d562bad46dae327de84513"} Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.501045 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5997f6f59b-xjrp4" event={"ID":"252f0b58-bb25-4d24-98a2-22cde8bb2daf","Type":"ContainerStarted","Data":"5c25eea491717e465fb2d3f7feea3794f39ee0e4a5826df670c0c4b0b16a69d7"} Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.501130 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5997f6f59b-xjrp4" event={"ID":"252f0b58-bb25-4d24-98a2-22cde8bb2daf","Type":"ContainerStarted","Data":"826904c2888d0fe5cb2ce7a8157898007e3c3caee20ec3f4370b56baff145aac"} Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.502830 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.502879 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.511473 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.511528 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.515823 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc","Type":"ContainerStarted","Data":"1dd0c4604533e99ba7ce4ab91d0d88b53721208932366ba691a0ae8362c412e6"} Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.523074 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-42qfl" podStartSLOduration=4.625733023 podStartE2EDuration="53.523045555s" podCreationTimestamp="2026-01-23 13:50:55 +0000 UTC" firstStartedPulling="2026-01-23 13:50:57.60438737 +0000 UTC m=+1098.626924995" lastFinishedPulling="2026-01-23 13:51:46.501699912 +0000 UTC m=+1147.524237527" observedRunningTime="2026-01-23 13:51:48.515653872 +0000 UTC m=+1149.538191497" watchObservedRunningTime="2026-01-23 13:51:48.523045555 +0000 UTC m=+1149.545583180" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.547847 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hbkvm" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.550591 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e4a6097-fbe0-4d82-8211-f76c15aa9e85","Type":"ContainerStarted","Data":"deb1395d6cdbf733141ce5fda30d88f3c2c78b3444269cc1ce059e2b14ac2180"} Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.559838 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5997f6f59b-xjrp4" podStartSLOduration=3.559742078 podStartE2EDuration="3.559742078s" podCreationTimestamp="2026-01-23 13:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:48.546796818 +0000 UTC m=+1149.569334443" watchObservedRunningTime="2026-01-23 13:51:48.559742078 +0000 UTC m=+1149.582279703" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.614292 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.614397 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.635642 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.668839 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.697195 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 23 13:51:48 crc kubenswrapper[4771]: E0123 13:51:48.920159 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8fe2dfb_8c93_4c82_bbc8_b24a3b6c6515.slice/crio-d919abd63a67fcb20ab644ebe0b34542f43aab83bf810c89104063e9c6307e81\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8fe2dfb_8c93_4c82_bbc8_b24a3b6c6515.slice\": RecentStats: unable to find data in memory cache]" Jan 23 13:51:48 crc kubenswrapper[4771]: I0123 13:51:48.975199 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c75694975-s585q"] Jan 23 13:51:49 crc kubenswrapper[4771]: I0123 13:51:49.576159 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc","Type":"ContainerStarted","Data":"02c8d4145423a8dc0a6b97fb6bb64412e61d5058cf7485df16a04a69c93c139f"} Jan 23 13:51:49 crc kubenswrapper[4771]: I0123 13:51:49.594893 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c75694975-s585q" event={"ID":"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0","Type":"ContainerStarted","Data":"89ad860f6176814e245c5c5fef6f04edd77e02192dc3da290c3fc9927b67632a"} Jan 23 13:51:49 crc kubenswrapper[4771]: I0123 13:51:49.594948 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c75694975-s585q" event={"ID":"6860d79b-06bf-4ca1-b0a1-2d05a7b594c0","Type":"ContainerStarted","Data":"011f712c3f932c7f6fd724efd236a07e9d96bd564d4e820625f3118390468d66"} Jan 23 13:51:49 crc kubenswrapper[4771]: I0123 13:51:49.676328 4771 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c75694975-s585q" podStartSLOduration=2.676262367 podStartE2EDuration="2.676262367s" podCreationTimestamp="2026-01-23 13:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:49.673577642 +0000 UTC m=+1150.696115277" watchObservedRunningTime="2026-01-23 13:51:49.676262367 +0000 UTC m=+1150.698799992" Jan 23 13:51:49 crc kubenswrapper[4771]: I0123 13:51:49.761671 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:49 crc kubenswrapper[4771]: I0123 13:51:49.811744 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.019628 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.190286 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b9f85775-829n5"] Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.190596 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75b9f85775-829n5" podUID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerName="dnsmasq-dns" containerID="cri-o://2896a6cc4a2017672bbacac5fd6177688d60deaa40d0fcb76782c74ea1654e99" gracePeriod=10 Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.628547 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e4a6097-fbe0-4d82-8211-f76c15aa9e85","Type":"ContainerStarted","Data":"33f68cc244fb43838ac5cb51c2052e354ead4c115dfb9bcee5098d66f9fd4411"} Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.683785 4771 generic.go:334] "Generic (PLEG): container finished" podID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerID="2896a6cc4a2017672bbacac5fd6177688d60deaa40d0fcb76782c74ea1654e99" exitCode=0 Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.683891 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b9f85775-829n5" event={"ID":"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3","Type":"ContainerDied","Data":"2896a6cc4a2017672bbacac5fd6177688d60deaa40d0fcb76782c74ea1654e99"} Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.689709 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d7jd6" event={"ID":"506b2de1-f73d-4781-a52d-3f622c78660d","Type":"ContainerStarted","Data":"1250cdcdd562bec8790972936e00f99cde74755985fff387fe46d09a3f4d0f3e"} Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.689932 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c75694975-s585q" Jan 23 13:51:50 crc kubenswrapper[4771]: I0123 13:51:50.721153 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-d7jd6" podStartSLOduration=5.057550169 podStartE2EDuration="55.721112657s" podCreationTimestamp="2026-01-23 13:50:55 +0000 UTC" firstStartedPulling="2026-01-23 13:50:57.002059826 +0000 UTC m=+1098.024597451" lastFinishedPulling="2026-01-23 13:51:47.665622324 +0000 UTC m=+1148.688159939" observedRunningTime="2026-01-23 13:51:50.705607276 +0000 UTC m=+1151.728144901" watchObservedRunningTime="2026-01-23 13:51:50.721112657 +0000 UTC m=+1151.743650302" Jan 23 13:51:51 crc 
kubenswrapper[4771]: I0123 13:51:51.223562 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.304591 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-svc\") pod \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.305055 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-swift-storage-0\") pod \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.305253 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-nb\") pod \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.305353 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-config\") pod \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.305517 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqfst\" (UniqueName: \"kubernetes.io/projected/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-kube-api-access-pqfst\") pod \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.305674 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-sb\") pod \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\" (UID: \"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3\") " Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.323858 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-kube-api-access-pqfst" (OuterVolumeSpecName: "kube-api-access-pqfst") pod "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" (UID: "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3"). InnerVolumeSpecName "kube-api-access-pqfst". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.380431 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-config" (OuterVolumeSpecName: "config") pod "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" (UID: "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.416060 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" (UID: "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.420114 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.420162 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.420176 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqfst\" (UniqueName: \"kubernetes.io/projected/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-kube-api-access-pqfst\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.483292 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" (UID: "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.495290 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" (UID: "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.526305 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.526356 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.536763 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" (UID: "d40c1a7d-35c9-4fb5-8023-0c5a02e376e3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.631185 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.710336 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e4a6097-fbe0-4d82-8211-f76c15aa9e85","Type":"ContainerStarted","Data":"a5ae9e2900d5d17c528f906a6b0ef104cf5f0c080873ec30ca11cd115b8eb912"} Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.717791 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75b9f85775-829n5" event={"ID":"d40c1a7d-35c9-4fb5-8023-0c5a02e376e3","Type":"ContainerDied","Data":"b9a0e22df596ea9adc9fb907a4243a936ec5137251f316f825948d002444e17c"} Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.717828 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75b9f85775-829n5" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.717889 4771 scope.go:117] "RemoveContainer" containerID="2896a6cc4a2017672bbacac5fd6177688d60deaa40d0fcb76782c74ea1654e99" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.776649 4771 generic.go:334] "Generic (PLEG): container finished" podID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerID="936ab2aa56733430a3b6235b328e932f1daa5e0c41b231ba40ea1373444cc2b5" exitCode=1 Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.776819 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerDied","Data":"936ab2aa56733430a3b6235b328e932f1daa5e0c41b231ba40ea1373444cc2b5"} Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.778242 4771 scope.go:117] "RemoveContainer" containerID="936ab2aa56733430a3b6235b328e932f1daa5e0c41b231ba40ea1373444cc2b5" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.788890 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.788859522 podStartE2EDuration="5.788859522s" podCreationTimestamp="2026-01-23 13:51:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:51.779170464 +0000 UTC m=+1152.801708099" watchObservedRunningTime="2026-01-23 13:51:51.788859522 +0000 UTC m=+1152.811397157" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.817651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc","Type":"ContainerStarted","Data":"f2a04affa6a1bdccb7c160162a760998b648e41709e2b16b09195df8a4863b71"} Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.928843 4771 scope.go:117] "RemoveContainer" containerID="a56c58735bf9bf0b1f702868165f319b0ce6dbebcbd81d7280c0d966a22224af" Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.940463 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75b9f85775-829n5"] Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.971451 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75b9f85775-829n5"] Jan 23 13:51:51 crc kubenswrapper[4771]: I0123 13:51:51.992968 4771 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.992939067 podStartE2EDuration="6.992939067s" podCreationTimestamp="2026-01-23 13:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:51:51.8983268 +0000 UTC m=+1152.920864415" watchObservedRunningTime="2026-01-23 13:51:51.992939067 +0000 UTC m=+1153.015476692" Jan 23 13:51:52 crc kubenswrapper[4771]: I0123 13:51:52.833783 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerStarted","Data":"ed64285768363f40d8a5f260897a92b40be384863b637bc51517b7501b82123c"} Jan 23 13:51:52 crc kubenswrapper[4771]: I0123 13:51:52.987916 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:52 crc kubenswrapper[4771]: I0123 13:51:52.988484 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api-log" containerID="cri-o://5fb33c8981a46ccd45ccd00cb9b7ad5109b5313c22819c787e670c92b9595899" gracePeriod=30 Jan 23 13:51:52 crc kubenswrapper[4771]: I0123 13:51:52.988723 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api" containerID="cri-o://b0c4d637ac8b9f1b7f2d46bac75336fe7d95abe8ebe24519f440d64402df3d74" gracePeriod=30 Jan 23 13:51:53 crc kubenswrapper[4771]: I0123 13:51:53.243643 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" path="/var/lib/kubelet/pods/d40c1a7d-35c9-4fb5-8023-0c5a02e376e3/volumes" Jan 23 13:51:54 crc kubenswrapper[4771]: I0123 13:51:54.511788 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-99f77f8d8-2j9s2" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Jan 23 13:51:54 crc kubenswrapper[4771]: I0123 13:51:54.613114 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57cbdcc8d-5lcfn" podUID="dd12560a-7353-492b-8037-822d7aceb4e0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.168:8443: connect: connection refused" Jan 23 13:51:54 crc kubenswrapper[4771]: I0123 13:51:54.867953 4771 generic.go:334] "Generic (PLEG): container finished" podID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerID="5fb33c8981a46ccd45ccd00cb9b7ad5109b5313c22819c787e670c92b9595899" exitCode=143 Jan 23 13:51:54 crc kubenswrapper[4771]: I0123 13:51:54.867992 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe","Type":"ContainerDied","Data":"5fb33c8981a46ccd45ccd00cb9b7ad5109b5313c22819c787e670c92b9595899"} Jan 23 13:51:55 crc kubenswrapper[4771]: I0123 13:51:55.123981 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": read tcp 
10.217.0.2:54688->10.217.0.171:9322: read: connection reset by peer" Jan 23 13:51:55 crc kubenswrapper[4771]: I0123 13:51:55.124067 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.171:9322/\": read tcp 10.217.0.2:54692->10.217.0.171:9322: read: connection reset by peer" Jan 23 13:51:55 crc kubenswrapper[4771]: I0123 13:51:55.890833 4771 generic.go:334] "Generic (PLEG): container finished" podID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerID="b0c4d637ac8b9f1b7f2d46bac75336fe7d95abe8ebe24519f440d64402df3d74" exitCode=0 Jan 23 13:51:55 crc kubenswrapper[4771]: I0123 13:51:55.890896 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe","Type":"ContainerDied","Data":"b0c4d637ac8b9f1b7f2d46bac75336fe7d95abe8ebe24519f440d64402df3d74"} Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.287876 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.287949 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.347352 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.385848 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.421733 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.421802 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.538751 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.675388 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.905655 4771 generic.go:334] "Generic (PLEG): container finished" podID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerID="ed64285768363f40d8a5f260897a92b40be384863b637bc51517b7501b82123c" exitCode=1 Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.907513 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerDied","Data":"ed64285768363f40d8a5f260897a92b40be384863b637bc51517b7501b82123c"} Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.907562 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.907580 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.907597 4771 scope.go:117] "RemoveContainer" 
containerID="936ab2aa56733430a3b6235b328e932f1daa5e0c41b231ba40ea1373444cc2b5" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.908338 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.908395 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 13:51:56 crc kubenswrapper[4771]: I0123 13:51:56.909203 4771 scope.go:117] "RemoveContainer" containerID="ed64285768363f40d8a5f260897a92b40be384863b637bc51517b7501b82123c" Jan 23 13:51:56 crc kubenswrapper[4771]: E0123 13:51:56.909490 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ebafbd30-6f52-4209-b962-c97da4d4f9da)\"" pod="openstack/watcher-decision-engine-0" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.424127 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.517697 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-combined-ca-bundle\") pod \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.517823 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-logs\") pod \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.517967 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-custom-prometheus-ca\") pod \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.518049 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-config-data\") pod \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.518138 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45g2w\" (UniqueName: \"kubernetes.io/projected/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-kube-api-access-45g2w\") pod \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\" (UID: \"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe\") " Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.518356 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-logs" (OuterVolumeSpecName: "logs") pod "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" (UID: "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.519472 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.793785 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-kube-api-access-45g2w" (OuterVolumeSpecName: "kube-api-access-45g2w") pod "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" (UID: "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe"). InnerVolumeSpecName "kube-api-access-45g2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.800062 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" (UID: "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.804069 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" (UID: "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.825337 4771 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.825373 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45g2w\" (UniqueName: \"kubernetes.io/projected/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-kube-api-access-45g2w\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.825386 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.845196 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-config-data" (OuterVolumeSpecName: "config-data") pod "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" (UID: "ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.919619 4771 generic.go:334] "Generic (PLEG): container finished" podID="13f63357-c0a0-49eb-9011-bd32c84f414a" containerID="5b362fa2ace7a4e0d64395ac447be66d9a4f4db474d562bad46dae327de84513" exitCode=0 Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.919784 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-42qfl" event={"ID":"13f63357-c0a0-49eb-9011-bd32c84f414a","Type":"ContainerDied","Data":"5b362fa2ace7a4e0d64395ac447be66d9a4f4db474d562bad46dae327de84513"} Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.929068 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerStarted","Data":"2171b110e5f17015d272decfd5c3aac00e2162ba322460b8da870bae1b885cad"} Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.930309 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.940789 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.942889 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe","Type":"ContainerDied","Data":"4677c51e504d11f4dac31eca67e8893575ff123973cd6efe0aa1e06e23177951"} Jan 23 13:51:57 crc kubenswrapper[4771]: I0123 13:51:57.943016 4771 scope.go:117] "RemoveContainer" containerID="b0c4d637ac8b9f1b7f2d46bac75336fe7d95abe8ebe24519f440d64402df3d74" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.028567 4771 scope.go:117] "RemoveContainer" containerID="5fb33c8981a46ccd45ccd00cb9b7ad5109b5313c22819c787e670c92b9595899" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.029140 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.062037 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.141576 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:58 crc kubenswrapper[4771]: E0123 13:51:58.144040 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api-log" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.144074 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api-log" Jan 23 13:51:58 crc kubenswrapper[4771]: E0123 13:51:58.144103 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerName="init" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.144111 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerName="init" Jan 23 13:51:58 crc kubenswrapper[4771]: E0123 13:51:58.144125 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.144132 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api" Jan 23 13:51:58 crc kubenswrapper[4771]: E0123 13:51:58.144153 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerName="dnsmasq-dns" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.144162 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerName="dnsmasq-dns" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.144798 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.144846 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40c1a7d-35c9-4fb5-8023-0c5a02e376e3" containerName="dnsmasq-dns" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.144873 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" containerName="watcher-api-log" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.147646 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.153020 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.153640 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.159024 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.170897 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.342726 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.343161 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-logs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.343535 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-config-data\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.343814 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vl6\" (UniqueName: \"kubernetes.io/projected/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-kube-api-access-t2vl6\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.343996 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.344035 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.344255 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-public-tls-certs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.446279 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.446334 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.446386 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-public-tls-certs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.446487 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.446527 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-logs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.446555 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-config-data\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.446592 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vl6\" (UniqueName: \"kubernetes.io/projected/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-kube-api-access-t2vl6\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 
13:51:58.447561 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-logs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.453206 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.453556 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-config-data\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.453666 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.455070 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.455927 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-public-tls-certs\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.474033 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2vl6\" (UniqueName: \"kubernetes.io/projected/588014e9-5ed0-4dfc-862e-ccafe84d7c3c-kube-api-access-t2vl6\") pod \"watcher-api-0\" (UID: \"588014e9-5ed0-4dfc-862e-ccafe84d7c3c\") " pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.484071 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.509327 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.509396 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.511353 4771 scope.go:117] "RemoveContainer" containerID="ed64285768363f40d8a5f260897a92b40be384863b637bc51517b7501b82123c" Jan 23 13:51:58 crc kubenswrapper[4771]: E0123 13:51:58.511630 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ebafbd30-6f52-4209-b962-c97da4d4f9da)\"" pod="openstack/watcher-decision-engine-0" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.952570 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:51:58 crc kubenswrapper[4771]: I0123 13:51:58.952953 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.042787 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.246819 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe" path="/var/lib/kubelet/pods/ce5e3a5c-2f83-4174-95bb-e5cb91b8acfe/volumes" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.531497 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-42qfl" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.677636 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-db-sync-config-data\") pod \"13f63357-c0a0-49eb-9011-bd32c84f414a\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.677856 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-combined-ca-bundle\") pod \"13f63357-c0a0-49eb-9011-bd32c84f414a\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.678066 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdq2b\" (UniqueName: \"kubernetes.io/projected/13f63357-c0a0-49eb-9011-bd32c84f414a-kube-api-access-qdq2b\") pod \"13f63357-c0a0-49eb-9011-bd32c84f414a\" (UID: \"13f63357-c0a0-49eb-9011-bd32c84f414a\") " Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.683425 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "13f63357-c0a0-49eb-9011-bd32c84f414a" (UID: "13f63357-c0a0-49eb-9011-bd32c84f414a"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.683653 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f63357-c0a0-49eb-9011-bd32c84f414a-kube-api-access-qdq2b" (OuterVolumeSpecName: "kube-api-access-qdq2b") pod "13f63357-c0a0-49eb-9011-bd32c84f414a" (UID: "13f63357-c0a0-49eb-9011-bd32c84f414a"). InnerVolumeSpecName "kube-api-access-qdq2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.718325 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13f63357-c0a0-49eb-9011-bd32c84f414a" (UID: "13f63357-c0a0-49eb-9011-bd32c84f414a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.781129 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.781169 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdq2b\" (UniqueName: \"kubernetes.io/projected/13f63357-c0a0-49eb-9011-bd32c84f414a-kube-api-access-qdq2b\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.781185 4771 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/13f63357-c0a0-49eb-9011-bd32c84f414a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.988978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-42qfl" event={"ID":"13f63357-c0a0-49eb-9011-bd32c84f414a","Type":"ContainerDied","Data":"c78fbb961c88ce0024894b061b2ba68781596e8404afd0107e35705ce36bcdc9"} Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.989375 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c78fbb961c88ce0024894b061b2ba68781596e8404afd0107e35705ce36bcdc9" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.989452 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-42qfl" Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.993495 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"588014e9-5ed0-4dfc-862e-ccafe84d7c3c","Type":"ContainerStarted","Data":"cff29dd83d6f4e5dc2f9765055f879425db3afa6400ca20c70ec8fbc9b3c9494"} Jan 23 13:51:59 crc kubenswrapper[4771]: I0123 13:51:59.993560 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"588014e9-5ed0-4dfc-862e-ccafe84d7c3c","Type":"ContainerStarted","Data":"2af9705cbc35973dbffc219dd80bccad765be40fd16b1801b9211300c97b9b05"} Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.370570 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5db75f46cc-8gg5z"] Jan 23 13:52:00 crc kubenswrapper[4771]: E0123 13:52:00.371280 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f63357-c0a0-49eb-9011-bd32c84f414a" containerName="barbican-db-sync" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.371301 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f63357-c0a0-49eb-9011-bd32c84f414a" containerName="barbican-db-sync" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.371728 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f63357-c0a0-49eb-9011-bd32c84f414a" containerName="barbican-db-sync" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.373075 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.383815 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.383876 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7xxs6" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.384150 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.405007 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-58f475d4c8-2cpwk"] Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.418246 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.433451 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.440553 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5db75f46cc-8gg5z"] Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.469004 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-58f475d4c8-2cpwk"] Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.514774 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-config-data-custom\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.514942 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-config-data\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.514973 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-logs\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.515046 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-combined-ca-bundle\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.515168 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2whg9\" (UniqueName: \"kubernetes.io/projected/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-kube-api-access-2whg9\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.617499 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8647c8d887-wjkwf"] Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.619967 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625661 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztd6q\" (UniqueName: \"kubernetes.io/projected/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-kube-api-access-ztd6q\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625753 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-swift-storage-0\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625783 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-config-data-custom\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625801 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2298g\" (UniqueName: \"kubernetes.io/projected/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-kube-api-access-2298g\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625827 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-combined-ca-bundle\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625858 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-svc\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625890 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-config-data\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625911 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-logs\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625948 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-combined-ca-bundle\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.625965 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-sb\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.626010 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-logs\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.626041 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-config\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.626061 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-nb\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.626099 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2whg9\" (UniqueName: \"kubernetes.io/projected/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-kube-api-access-2whg9\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.626125 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-config-data-custom\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.626152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-config-data\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.630168 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-logs\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 
13:52:00.661606 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8647c8d887-wjkwf"] Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.662553 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-combined-ca-bundle\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.662944 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-config-data\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.675434 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-config-data-custom\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.718448 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2whg9\" (UniqueName: \"kubernetes.io/projected/6869d87c-129e-4f55-947d-b1dbcc1eb7fb-kube-api-access-2whg9\") pod \"barbican-worker-5db75f46cc-8gg5z\" (UID: \"6869d87c-129e-4f55-947d-b1dbcc1eb7fb\") " pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.725126 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5db75f46cc-8gg5z" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.728929 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-swift-storage-0\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2298g\" (UniqueName: \"kubernetes.io/projected/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-kube-api-access-2298g\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729038 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-combined-ca-bundle\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-svc\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729152 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-sb\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729175 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-logs\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729201 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-config\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729225 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-nb\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729282 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-config-data-custom\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: 
\"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729311 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-config-data\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.729342 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztd6q\" (UniqueName: \"kubernetes.io/projected/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-kube-api-access-ztd6q\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.736887 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-logs\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.739690 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-svc\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.740458 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-sb\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.741133 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-nb\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.751914 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-config\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.755456 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-swift-storage-0\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.759453 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-config-data\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: 
\"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.766108 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-config-data-custom\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.772448 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-combined-ca-bundle\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.774484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2298g\" (UniqueName: \"kubernetes.io/projected/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-kube-api-access-2298g\") pod \"dnsmasq-dns-8647c8d887-wjkwf\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.774950 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztd6q\" (UniqueName: \"kubernetes.io/projected/b369de15-be5b-46dc-9a6a-5bd2cdca01a3-kube-api-access-ztd6q\") pod \"barbican-keystone-listener-58f475d4c8-2cpwk\" (UID: \"b369de15-be5b-46dc-9a6a-5bd2cdca01a3\") " pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.790006 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.886369 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-798cb98666-gbkq6"] Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.888228 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.893575 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.921951 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 13:52:00 crc kubenswrapper[4771]: I0123 13:52:00.993637 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798cb98666-gbkq6"] Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.007765 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"588014e9-5ed0-4dfc-862e-ccafe84d7c3c","Type":"ContainerStarted","Data":"53a543cb322805edb0520bbe53a59477e2d6a0e0d225dcb91f08fffdda1e7d3d"} Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.010360 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.012377 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="588014e9-5ed0-4dfc-862e-ccafe84d7c3c" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.183:9322/\": dial tcp 10.217.0.183:9322: connect: connection refused" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.039188 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data-custom\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.039634 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55614ed-18f8-4dab-a774-c161ab25107a-logs\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.039731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-combined-ca-bundle\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.039772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjkl\" (UniqueName: \"kubernetes.io/projected/c55614ed-18f8-4dab-a774-c161ab25107a-kube-api-access-vpjkl\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.039807 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.087146 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.087118546 podStartE2EDuration="3.087118546s" podCreationTimestamp="2026-01-23 13:51:58 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:01.074451295 +0000 UTC m=+1162.096988920" watchObservedRunningTime="2026-01-23 13:52:01.087118546 +0000 UTC m=+1162.109656171" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.142549 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-combined-ca-bundle\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.143632 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjkl\" (UniqueName: \"kubernetes.io/projected/c55614ed-18f8-4dab-a774-c161ab25107a-kube-api-access-vpjkl\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.143676 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.143885 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data-custom\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.143905 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55614ed-18f8-4dab-a774-c161ab25107a-logs\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.144302 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55614ed-18f8-4dab-a774-c161ab25107a-logs\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.148506 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-combined-ca-bundle\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.152593 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.152716 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.154100 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data\") pod 
\"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.163182 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data-custom\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.178846 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.188466 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjkl\" (UniqueName: \"kubernetes.io/projected/c55614ed-18f8-4dab-a774-c161ab25107a-kube-api-access-vpjkl\") pod \"barbican-api-798cb98666-gbkq6\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.216904 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.217423 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.259216 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:01 crc kubenswrapper[4771]: I0123 13:52:01.524392 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 13:52:02 crc kubenswrapper[4771]: I0123 13:52:02.006843 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-58f475d4c8-2cpwk"] Jan 23 13:52:02 crc kubenswrapper[4771]: I0123 13:52:02.041660 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5db75f46cc-8gg5z"] Jan 23 13:52:02 crc kubenswrapper[4771]: I0123 13:52:02.184198 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8647c8d887-wjkwf"] Jan 23 13:52:02 crc kubenswrapper[4771]: I0123 13:52:02.337901 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-798cb98666-gbkq6"] Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.077292 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" event={"ID":"b369de15-be5b-46dc-9a6a-5bd2cdca01a3","Type":"ContainerStarted","Data":"c22abe2c4bb01c8eb3a1fc7cdab15c1a3d805812b4f9936d2c8ba061304050e5"} Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.091682 4771 generic.go:334] "Generic (PLEG): container finished" podID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerID="d75a57f96158eeaae9343d51d46dc7130b9ff3961c348006eb0ca56947114878" exitCode=0 Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.091861 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" event={"ID":"8d0e6b33-af5f-449d-b51f-ba2725cedd3b","Type":"ContainerDied","Data":"d75a57f96158eeaae9343d51d46dc7130b9ff3961c348006eb0ca56947114878"} Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.091930 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" 
event={"ID":"8d0e6b33-af5f-449d-b51f-ba2725cedd3b","Type":"ContainerStarted","Data":"3dc827da598433d9dc988046ab9f0c93864674ddce4d17af2fd872689d6cadb4"} Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.102227 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798cb98666-gbkq6" event={"ID":"c55614ed-18f8-4dab-a774-c161ab25107a","Type":"ContainerStarted","Data":"a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f"} Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.102287 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798cb98666-gbkq6" event={"ID":"c55614ed-18f8-4dab-a774-c161ab25107a","Type":"ContainerStarted","Data":"661f5a8b8d6f11aa296ee063c5d3c611fd8f5815422507d0376325018f1cb9f7"} Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.108489 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5db75f46cc-8gg5z" event={"ID":"6869d87c-129e-4f55-947d-b1dbcc1eb7fb","Type":"ContainerStarted","Data":"6e821d2627d30e6b0195999cd79c0c1a7417cffbccd4a76a2204b86fc7f40cf3"} Jan 23 13:52:03 crc kubenswrapper[4771]: I0123 13:52:03.484703 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.141569 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" event={"ID":"8d0e6b33-af5f-449d-b51f-ba2725cedd3b","Type":"ContainerStarted","Data":"c865b2bf5cc942e6c3f26dd2c3f3ff7a02e5a9cf8b2e8cef84a88d32613d3754"} Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.142201 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.153766 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798cb98666-gbkq6" event={"ID":"c55614ed-18f8-4dab-a774-c161ab25107a","Type":"ContainerStarted","Data":"236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949"} Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.154893 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.154937 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.162812 4771 generic.go:334] "Generic (PLEG): container finished" podID="506b2de1-f73d-4781-a52d-3f622c78660d" containerID="1250cdcdd562bec8790972936e00f99cde74755985fff387fe46d09a3f4d0f3e" exitCode=0 Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.162908 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.162922 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d7jd6" event={"ID":"506b2de1-f73d-4781-a52d-3f622c78660d","Type":"ContainerDied","Data":"1250cdcdd562bec8790972936e00f99cde74755985fff387fe46d09a3f4d0f3e"} Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.193076 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" podStartSLOduration=4.193050947 podStartE2EDuration="4.193050947s" podCreationTimestamp="2026-01-23 13:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 13:52:04.180106757 +0000 UTC m=+1165.202644392" watchObservedRunningTime="2026-01-23 13:52:04.193050947 +0000 UTC m=+1165.215588572" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.236398 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-798cb98666-gbkq6" podStartSLOduration=4.23636428 podStartE2EDuration="4.23636428s" podCreationTimestamp="2026-01-23 13:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:04.220373912 +0000 UTC m=+1165.242911537" watchObservedRunningTime="2026-01-23 13:52:04.23636428 +0000 UTC m=+1165.258901905" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.718473 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7648986c66-7zlgv"] Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.720450 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.724200 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.724544 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.740324 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7648986c66-7zlgv"] Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.783756 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-public-tls-certs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.783846 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-config-data\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.783867 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-config-data-custom\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.783923 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8jvn\" (UniqueName: \"kubernetes.io/projected/ee927340-158a-4961-a78f-c8ae1fae907f-kube-api-access-m8jvn\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.783951 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee927340-158a-4961-a78f-c8ae1fae907f-logs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: 
\"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.783994 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-combined-ca-bundle\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.784045 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-internal-tls-certs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.885879 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-internal-tls-certs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.885963 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-public-tls-certs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.886018 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-config-data\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.886038 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-config-data-custom\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.886074 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8jvn\" (UniqueName: \"kubernetes.io/projected/ee927340-158a-4961-a78f-c8ae1fae907f-kube-api-access-m8jvn\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.886092 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee927340-158a-4961-a78f-c8ae1fae907f-logs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.886134 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-combined-ca-bundle\") pod 
\"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.891633 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee927340-158a-4961-a78f-c8ae1fae907f-logs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.903577 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-public-tls-certs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.904183 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-config-data\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.904523 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-combined-ca-bundle\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.914292 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-config-data-custom\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.914824 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee927340-158a-4961-a78f-c8ae1fae907f-internal-tls-certs\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:04 crc kubenswrapper[4771]: I0123 13:52:04.932663 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8jvn\" (UniqueName: \"kubernetes.io/projected/ee927340-158a-4961-a78f-c8ae1fae907f-kube-api-access-m8jvn\") pod \"barbican-api-7648986c66-7zlgv\" (UID: \"ee927340-158a-4961-a78f-c8ae1fae907f\") " pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.058512 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.327068 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fcb4fcfd8-xrpf8" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.599191 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-585dc95dd9-rtq4d"] Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.600758 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-585dc95dd9-rtq4d" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-api" containerID="cri-o://10e59b1f8efabf7952b88b4340e3e30fd7d851847aa43c0d75748b5d89eb2677" gracePeriod=30 Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.603993 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-585dc95dd9-rtq4d" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-httpd" containerID="cri-o://7868613dd37b3a8d5277e8a37e8b3d59908a9becb555d7bc67322f4f77e0f548" gracePeriod=30 Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.631321 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-585dc95dd9-rtq4d" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.178:9696/\": read tcp 10.217.0.2:46304->10.217.0.178:9696: read: connection reset by peer" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.702442 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fd76d6849-9jhnn"] Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.707658 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.721090 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fd76d6849-9jhnn"] Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.807459 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d7jd6" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.821341 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-combined-ca-bundle\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.821444 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-httpd-config\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.821499 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-internal-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.821550 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-public-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.821597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-ovndb-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.821635 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-config\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.821713 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrff\" (UniqueName: \"kubernetes.io/projected/c97f9128-e90c-482e-9b39-0505b4195ced-kube-api-access-nhrff\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.923897 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-scripts\") pod \"506b2de1-f73d-4781-a52d-3f622c78660d\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.924840 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-combined-ca-bundle\") pod \"506b2de1-f73d-4781-a52d-3f622c78660d\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") 
" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.925037 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-config-data\") pod \"506b2de1-f73d-4781-a52d-3f622c78660d\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.925172 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbqcx\" (UniqueName: \"kubernetes.io/projected/506b2de1-f73d-4781-a52d-3f622c78660d-kube-api-access-xbqcx\") pod \"506b2de1-f73d-4781-a52d-3f622c78660d\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.925263 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-db-sync-config-data\") pod \"506b2de1-f73d-4781-a52d-3f622c78660d\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.925383 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/506b2de1-f73d-4781-a52d-3f622c78660d-etc-machine-id\") pod \"506b2de1-f73d-4781-a52d-3f622c78660d\" (UID: \"506b2de1-f73d-4781-a52d-3f622c78660d\") " Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.925823 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-combined-ca-bundle\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.925963 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-httpd-config\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.926052 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-internal-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.926157 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-public-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.926245 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-ovndb-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.926318 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-config\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.926467 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhrff\" (UniqueName: \"kubernetes.io/projected/c97f9128-e90c-482e-9b39-0505b4195ced-kube-api-access-nhrff\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.934851 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/506b2de1-f73d-4781-a52d-3f622c78660d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "506b2de1-f73d-4781-a52d-3f622c78660d" (UID: "506b2de1-f73d-4781-a52d-3f622c78660d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.939832 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-ovndb-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.949395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-httpd-config\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.950005 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-scripts" (OuterVolumeSpecName: "scripts") pod "506b2de1-f73d-4781-a52d-3f622c78660d" (UID: "506b2de1-f73d-4781-a52d-3f622c78660d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.950947 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-combined-ca-bundle\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.956956 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-config\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.958666 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/506b2de1-f73d-4781-a52d-3f622c78660d-kube-api-access-xbqcx" (OuterVolumeSpecName: "kube-api-access-xbqcx") pod "506b2de1-f73d-4781-a52d-3f622c78660d" (UID: "506b2de1-f73d-4781-a52d-3f622c78660d"). InnerVolumeSpecName "kube-api-access-xbqcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.961653 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-internal-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.961871 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97f9128-e90c-482e-9b39-0505b4195ced-public-tls-certs\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.963049 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "506b2de1-f73d-4781-a52d-3f622c78660d" (UID: "506b2de1-f73d-4781-a52d-3f622c78660d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:05 crc kubenswrapper[4771]: I0123 13:52:05.972575 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhrff\" (UniqueName: \"kubernetes.io/projected/c97f9128-e90c-482e-9b39-0505b4195ced-kube-api-access-nhrff\") pod \"neutron-6fd76d6849-9jhnn\" (UID: \"c97f9128-e90c-482e-9b39-0505b4195ced\") " pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:05.999995 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "506b2de1-f73d-4781-a52d-3f622c78660d" (UID: "506b2de1-f73d-4781-a52d-3f622c78660d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.029027 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.029068 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbqcx\" (UniqueName: \"kubernetes.io/projected/506b2de1-f73d-4781-a52d-3f622c78660d-kube-api-access-xbqcx\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.029080 4771 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.029088 4771 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/506b2de1-f73d-4781-a52d-3f622c78660d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.029097 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.046707 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-config-data" (OuterVolumeSpecName: "config-data") pod "506b2de1-f73d-4781-a52d-3f622c78660d" (UID: "506b2de1-f73d-4781-a52d-3f622c78660d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.060183 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.131733 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/506b2de1-f73d-4781-a52d-3f622c78660d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.229612 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5db75f46cc-8gg5z" event={"ID":"6869d87c-129e-4f55-947d-b1dbcc1eb7fb","Type":"ContainerStarted","Data":"2926ce7a53118b9c5f293b612543847d0ced49a36e09430f4db3fc8aa6fb5ac4"} Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.240226 4771 generic.go:334] "Generic (PLEG): container finished" podID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerID="7868613dd37b3a8d5277e8a37e8b3d59908a9becb555d7bc67322f4f77e0f548" exitCode=0 Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.240303 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585dc95dd9-rtq4d" event={"ID":"bd25f0ad-f4d3-4333-8803-cc30734719f9","Type":"ContainerDied","Data":"7868613dd37b3a8d5277e8a37e8b3d59908a9becb555d7bc67322f4f77e0f548"} Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.248972 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" event={"ID":"b369de15-be5b-46dc-9a6a-5bd2cdca01a3","Type":"ContainerStarted","Data":"b39e2206993c2dafdaa1eb887a0c9c7159ea4752765cfa923b474c8f7452ce81"} Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.260004 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d7jd6" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.264429 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d7jd6" event={"ID":"506b2de1-f73d-4781-a52d-3f622c78660d","Type":"ContainerDied","Data":"e9678ac423d72f443a637ed293b47d585b6c1fd768e5a514efda8f6f02ee499d"} Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.264508 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9678ac423d72f443a637ed293b47d585b6c1fd768e5a514efda8f6f02ee499d" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.268463 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7648986c66-7zlgv"] Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.509926 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:06 crc kubenswrapper[4771]: E0123 13:52:06.510476 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="506b2de1-f73d-4781-a52d-3f622c78660d" containerName="cinder-db-sync" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.510491 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="506b2de1-f73d-4781-a52d-3f622c78660d" containerName="cinder-db-sync" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.510684 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="506b2de1-f73d-4781-a52d-3f622c78660d" containerName="cinder-db-sync" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.514054 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.517112 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ns665" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.517606 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.517779 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.529764 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.546817 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.546884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.547060 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56tf\" (UniqueName: \"kubernetes.io/projected/ebcf6798-eeda-492f-b006-fd957b47f36e-kube-api-access-d56tf\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.547133 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.547344 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.547529 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebcf6798-eeda-492f-b006-fd957b47f36e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.561983 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.663709 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56tf\" (UniqueName: \"kubernetes.io/projected/ebcf6798-eeda-492f-b006-fd957b47f36e-kube-api-access-d56tf\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") 
" pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.664235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.664506 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.664701 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebcf6798-eeda-492f-b006-fd957b47f36e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.664866 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.664904 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.674779 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebcf6798-eeda-492f-b006-fd957b47f36e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.760553 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.765004 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.778894 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8647c8d887-wjkwf"] Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.779515 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" podUID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerName="dnsmasq-dns" containerID="cri-o://c865b2bf5cc942e6c3f26dd2c3f3ff7a02e5a9cf8b2e8cef84a88d32613d3754" gracePeriod=10 Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.803201 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.807317 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56tf\" (UniqueName: \"kubernetes.io/projected/ebcf6798-eeda-492f-b006-fd957b47f36e-kube-api-access-d56tf\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.809695 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.889305 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.900349 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fc64bcc5-zfnkn"] Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.902586 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.928123 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fc64bcc5-zfnkn"] Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.998874 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-svc\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.998985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-swift-storage-0\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.999042 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-config\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.999064 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-sb\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.999096 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv477\" (UniqueName: 
\"kubernetes.io/projected/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-kube-api-access-sv477\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:06 crc kubenswrapper[4771]: I0123 13:52:06.999139 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.044749 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="588014e9-5ed0-4dfc-862e-ccafe84d7c3c" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.183:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.102932 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-swift-storage-0\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.103005 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-config\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.103032 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-sb\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.103062 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv477\" (UniqueName: \"kubernetes.io/projected/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-kube-api-access-sv477\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.103109 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.103151 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-svc\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.104187 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-svc\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.105338 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-swift-storage-0\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.110106 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.111702 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-config\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.112652 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-sb\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.159347 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv477\" (UniqueName: \"kubernetes.io/projected/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-kube-api-access-sv477\") pod \"dnsmasq-dns-67fc64bcc5-zfnkn\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.207073 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.210011 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.219269 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.259940 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.278328 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7648986c66-7zlgv" event={"ID":"ee927340-158a-4961-a78f-c8ae1fae907f","Type":"ContainerStarted","Data":"6d20683ca3190ba23b39e55da1bec3e24e069f1f442c280631c59345b6d4b82a"} Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.329973 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" event={"ID":"b369de15-be5b-46dc-9a6a-5bd2cdca01a3","Type":"ContainerStarted","Data":"3d29f45a2ae7186122e85a301abdc3b3746013bc17c8be89acc326ea8263ec17"} Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.356487 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fd76d6849-9jhnn"] Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.411052 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz97q\" (UniqueName: \"kubernetes.io/projected/8b3923e5-ae72-46e8-a077-6ac4f4481a68-kube-api-access-bz97q\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.411144 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.411211 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data-custom\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.411341 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b3923e5-ae72-46e8-a077-6ac4f4481a68-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.411430 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-scripts\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.411458 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.411483 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3923e5-ae72-46e8-a077-6ac4f4481a68-logs\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.444560 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-58f475d4c8-2cpwk" podStartSLOduration=4.136446545 podStartE2EDuration="7.44453285s" podCreationTimestamp="2026-01-23 13:52:00 +0000 UTC" firstStartedPulling="2026-01-23 13:52:02.026435362 +0000 UTC m=+1163.048972987" lastFinishedPulling="2026-01-23 13:52:05.334521667 +0000 UTC m=+1166.357059292" observedRunningTime="2026-01-23 13:52:07.424128873 +0000 UTC m=+1168.446666508" watchObservedRunningTime="2026-01-23 13:52:07.44453285 +0000 UTC m=+1168.467070475" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.518604 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.519635 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data-custom\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.519892 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b3923e5-ae72-46e8-a077-6ac4f4481a68-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.520083 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-scripts\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.520183 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.520259 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3923e5-ae72-46e8-a077-6ac4f4481a68-logs\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.520362 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz97q\" (UniqueName: \"kubernetes.io/projected/8b3923e5-ae72-46e8-a077-6ac4f4481a68-kube-api-access-bz97q\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.527748 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/8b3923e5-ae72-46e8-a077-6ac4f4481a68-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.534473 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3923e5-ae72-46e8-a077-6ac4f4481a68-logs\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.535934 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-scripts\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.543066 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.557008 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.574035 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz97q\" (UniqueName: \"kubernetes.io/projected/8b3923e5-ae72-46e8-a077-6ac4f4481a68-kube-api-access-bz97q\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.574297 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data-custom\") pod \"cinder-api-0\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") " pod="openstack/cinder-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.841099 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.939111 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:07 crc kubenswrapper[4771]: I0123 13:52:07.987245 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.361563 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fd76d6849-9jhnn" event={"ID":"c97f9128-e90c-482e-9b39-0505b4195ced","Type":"ContainerStarted","Data":"e1ac9826c8fb21f2339201a81327806a3016d219e1d51a34c3b86e2eda47a7fa"} Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.366027 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7648986c66-7zlgv" event={"ID":"ee927340-158a-4961-a78f-c8ae1fae907f","Type":"ContainerStarted","Data":"5603d57f9dd0c78a48098d219bc8607e7a56ac0adc09363213c6f055b6900b9b"} Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.382081 4771 generic.go:334] "Generic (PLEG): container finished" podID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerID="c865b2bf5cc942e6c3f26dd2c3f3ff7a02e5a9cf8b2e8cef84a88d32613d3754" exitCode=0 Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.382191 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" event={"ID":"8d0e6b33-af5f-449d-b51f-ba2725cedd3b","Type":"ContainerDied","Data":"c865b2bf5cc942e6c3f26dd2c3f3ff7a02e5a9cf8b2e8cef84a88d32613d3754"} Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.382225 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" event={"ID":"8d0e6b33-af5f-449d-b51f-ba2725cedd3b","Type":"ContainerDied","Data":"3dc827da598433d9dc988046ab9f0c93864674ddce4d17af2fd872689d6cadb4"} Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.382237 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc827da598433d9dc988046ab9f0c93864674ddce4d17af2fd872689d6cadb4" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.407503 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5db75f46cc-8gg5z" event={"ID":"6869d87c-129e-4f55-947d-b1dbcc1eb7fb","Type":"ContainerStarted","Data":"7f85f2695130e5f23e51b5c967e3d39b39d92573bff3df44ebc467d7e9adf097"} Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.450148 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5db75f46cc-8gg5z" podStartSLOduration=5.179555928 podStartE2EDuration="8.450105764s" podCreationTimestamp="2026-01-23 13:52:00 +0000 UTC" firstStartedPulling="2026-01-23 13:52:02.049314087 +0000 UTC m=+1163.071851712" lastFinishedPulling="2026-01-23 13:52:05.319863923 +0000 UTC m=+1166.342401548" observedRunningTime="2026-01-23 13:52:08.435053237 +0000 UTC m=+1169.457590862" watchObservedRunningTime="2026-01-23 13:52:08.450105764 +0000 UTC m=+1169.472643389" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.485677 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.486088 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/watcher-api-0" podUID="588014e9-5ed0-4dfc-862e-ccafe84d7c3c" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.183:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.548796 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.579370 4771 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.607834 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.659299 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.811977 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-nb\") pod \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.812299 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-sb\") pod \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.812339 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2298g\" (UniqueName: \"kubernetes.io/projected/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-kube-api-access-2298g\") pod \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.812444 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-svc\") pod \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.812476 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-swift-storage-0\") pod \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.812499 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-config\") pod \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\" (UID: \"8d0e6b33-af5f-449d-b51f-ba2725cedd3b\") " Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.851693 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-kube-api-access-2298g" (OuterVolumeSpecName: "kube-api-access-2298g") pod "8d0e6b33-af5f-449d-b51f-ba2725cedd3b" (UID: "8d0e6b33-af5f-449d-b51f-ba2725cedd3b"). InnerVolumeSpecName "kube-api-access-2298g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.917838 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2298g\" (UniqueName: \"kubernetes.io/projected/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-kube-api-access-2298g\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:08 crc kubenswrapper[4771]: I0123 13:52:08.942845 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fc64bcc5-zfnkn"] Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.151859 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.166140 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-585dc95dd9-rtq4d" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.178:9696/\": dial tcp 10.217.0.178:9696: connect: connection refused" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.193340 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8d0e6b33-af5f-449d-b51f-ba2725cedd3b" (UID: "8d0e6b33-af5f-449d-b51f-ba2725cedd3b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.265351 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.456765 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ebcf6798-eeda-492f-b006-fd957b47f36e","Type":"ContainerStarted","Data":"05f14459054d48ae4107866d5d70b0859ed997c07e80898562f4bc3ad859c4aa"} Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.480812 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" event={"ID":"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1","Type":"ContainerStarted","Data":"d7aa5ebdbad8e0ca698395dea62f86618c05bb15828d0ff108b7cdefa86a1dbf"} Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.507761 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8647c8d887-wjkwf" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.507736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fd76d6849-9jhnn" event={"ID":"c97f9128-e90c-482e-9b39-0505b4195ced","Type":"ContainerStarted","Data":"f7a98ca8cf8f3e03046994ee7b137fedf06f9ac9507dc0e866173df46dbe8736"} Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.509626 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-99f77f8d8-2j9s2" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.533919 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.566254 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-config" (OuterVolumeSpecName: "config") pod "8d0e6b33-af5f-449d-b51f-ba2725cedd3b" (UID: "8d0e6b33-af5f-449d-b51f-ba2725cedd3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.569590 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d0e6b33-af5f-449d-b51f-ba2725cedd3b" (UID: "8d0e6b33-af5f-449d-b51f-ba2725cedd3b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.587878 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.588333 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.612234 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-57cbdcc8d-5lcfn" podUID="dd12560a-7353-492b-8037-822d7aceb4e0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.168:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.622496 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d0e6b33-af5f-449d-b51f-ba2725cedd3b" (UID: "8d0e6b33-af5f-449d-b51f-ba2725cedd3b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.622746 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d0e6b33-af5f-449d-b51f-ba2725cedd3b" (UID: "8d0e6b33-af5f-449d-b51f-ba2725cedd3b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.690448 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.690888 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d0e6b33-af5f-449d-b51f-ba2725cedd3b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.952921 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8647c8d887-wjkwf"] Jan 23 13:52:09 crc kubenswrapper[4771]: I0123 13:52:09.976300 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8647c8d887-wjkwf"] Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.544559 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b3923e5-ae72-46e8-a077-6ac4f4481a68","Type":"ContainerStarted","Data":"3a0ca57497a339be8decb6291b0f4b56916bdd120a28f37398a9b649247db287"} Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.553972 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fd76d6849-9jhnn" event={"ID":"c97f9128-e90c-482e-9b39-0505b4195ced","Type":"ContainerStarted","Data":"5b6d6c2ebae95a78124e1e93c1bc940e47611da07dd85f4d4712a5852fda33d5"} Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.555514 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.568464 4771 generic.go:334] "Generic (PLEG): container finished" podID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerID="61e5669029c699331a5f8fd4c9488128ce993e748e2f0d4ca0ea661743f23f2b" exitCode=0 Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.568542 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" event={"ID":"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1","Type":"ContainerDied","Data":"61e5669029c699331a5f8fd4c9488128ce993e748e2f0d4ca0ea661743f23f2b"} Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.578806 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fd76d6849-9jhnn" podStartSLOduration=5.578785767 podStartE2EDuration="5.578785767s" podCreationTimestamp="2026-01-23 13:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:10.576732832 +0000 UTC m=+1171.599270457" watchObservedRunningTime="2026-01-23 13:52:10.578785767 +0000 UTC m=+1171.601323402" Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.588648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7648986c66-7zlgv" event={"ID":"ee927340-158a-4961-a78f-c8ae1fae907f","Type":"ContainerStarted","Data":"e7d8338952cf171cf39fa243c914e19ae6c0249d6f65b163faf262cd21658fd1"} Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.588700 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 13:52:10.588725 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:10 crc kubenswrapper[4771]: I0123 
13:52:10.625731 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7648986c66-7zlgv" podStartSLOduration=6.625704793 podStartE2EDuration="6.625704793s" podCreationTimestamp="2026-01-23 13:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:10.623909727 +0000 UTC m=+1171.646447362" watchObservedRunningTime="2026-01-23 13:52:10.625704793 +0000 UTC m=+1171.648242418" Jan 23 13:52:11 crc kubenswrapper[4771]: I0123 13:52:11.005322 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 13:52:11 crc kubenswrapper[4771]: I0123 13:52:11.245336 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" path="/var/lib/kubelet/pods/8d0e6b33-af5f-449d-b51f-ba2725cedd3b/volumes" Jan 23 13:52:11 crc kubenswrapper[4771]: I0123 13:52:11.612652 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ebcf6798-eeda-492f-b006-fd957b47f36e","Type":"ContainerStarted","Data":"e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce"} Jan 23 13:52:11 crc kubenswrapper[4771]: I0123 13:52:11.622111 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b3923e5-ae72-46e8-a077-6ac4f4481a68","Type":"ContainerStarted","Data":"86029f1d13655df0a06639549de5478e31c78d6ae13d0a7241c1830af3f28f4d"} Jan 23 13:52:11 crc kubenswrapper[4771]: I0123 13:52:11.627906 4771 generic.go:334] "Generic (PLEG): container finished" podID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerID="10e59b1f8efabf7952b88b4340e3e30fd7d851847aa43c0d75748b5d89eb2677" exitCode=0 Jan 23 13:52:11 crc kubenswrapper[4771]: I0123 13:52:11.629014 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585dc95dd9-rtq4d" event={"ID":"bd25f0ad-f4d3-4333-8803-cc30734719f9","Type":"ContainerDied","Data":"10e59b1f8efabf7952b88b4340e3e30fd7d851847aa43c0d75748b5d89eb2677"} Jan 23 13:52:12 crc kubenswrapper[4771]: I0123 13:52:12.229069 4771 scope.go:117] "RemoveContainer" containerID="ed64285768363f40d8a5f260897a92b40be384863b637bc51517b7501b82123c" Jan 23 13:52:12 crc kubenswrapper[4771]: I0123 13:52:12.949550 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:13 crc kubenswrapper[4771]: I0123 13:52:13.224563 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:16 crc kubenswrapper[4771]: I0123 13:52:16.862986 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:16 crc kubenswrapper[4771]: I0123 13:52:16.870673 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7648986c66-7zlgv" Jan 23 13:52:16 crc kubenswrapper[4771]: I0123 13:52:16.966378 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798cb98666-gbkq6"] Jan 23 13:52:16 crc kubenswrapper[4771]: I0123 13:52:16.966714 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798cb98666-gbkq6" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api-log" containerID="cri-o://a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f" gracePeriod=30 Jan 23 13:52:16 crc 
kubenswrapper[4771]: I0123 13:52:16.967189 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-798cb98666-gbkq6" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api" containerID="cri-o://236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949" gracePeriod=30 Jan 23 13:52:17 crc kubenswrapper[4771]: I0123 13:52:17.653666 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:52:17 crc kubenswrapper[4771]: I0123 13:52:17.787019 4771 generic.go:334] "Generic (PLEG): container finished" podID="c55614ed-18f8-4dab-a774-c161ab25107a" containerID="a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f" exitCode=143 Jan 23 13:52:17 crc kubenswrapper[4771]: I0123 13:52:17.788534 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798cb98666-gbkq6" event={"ID":"c55614ed-18f8-4dab-a774-c161ab25107a","Type":"ContainerDied","Data":"a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f"} Jan 23 13:52:18 crc kubenswrapper[4771]: I0123 13:52:18.217134 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:52:18 crc kubenswrapper[4771]: I0123 13:52:18.506699 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:18 crc kubenswrapper[4771]: I0123 13:52:18.506746 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:18 crc kubenswrapper[4771]: I0123 13:52:18.982023 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:52:19 crc kubenswrapper[4771]: I0123 13:52:19.007768 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5997f6f59b-xjrp4" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.145730 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-99f77f8d8-2j9s2" Jan 23 13:52:20 crc kubenswrapper[4771]: E0123 13:52:20.284461 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 23 13:52:20 crc kubenswrapper[4771]: E0123 13:52:20.285179 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bc74c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(93183170-d32d-4633-a9b5-5740232e4da4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 13:52:20 crc kubenswrapper[4771]: E0123 13:52:20.287079 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="93183170-d32d-4633-a9b5-5740232e4da4" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.760837 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.864315 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55614ed-18f8-4dab-a774-c161ab25107a-logs\") pod \"c55614ed-18f8-4dab-a774-c161ab25107a\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.864455 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data\") pod \"c55614ed-18f8-4dab-a774-c161ab25107a\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.864489 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpjkl\" (UniqueName: \"kubernetes.io/projected/c55614ed-18f8-4dab-a774-c161ab25107a-kube-api-access-vpjkl\") pod \"c55614ed-18f8-4dab-a774-c161ab25107a\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.866585 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c55614ed-18f8-4dab-a774-c161ab25107a-logs" (OuterVolumeSpecName: "logs") pod "c55614ed-18f8-4dab-a774-c161ab25107a" (UID: "c55614ed-18f8-4dab-a774-c161ab25107a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.876648 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55614ed-18f8-4dab-a774-c161ab25107a-kube-api-access-vpjkl" (OuterVolumeSpecName: "kube-api-access-vpjkl") pod "c55614ed-18f8-4dab-a774-c161ab25107a" (UID: "c55614ed-18f8-4dab-a774-c161ab25107a"). InnerVolumeSpecName "kube-api-access-vpjkl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.877226 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" event={"ID":"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1","Type":"ContainerStarted","Data":"33f34933169f5dffb367a4fee829842938952249f48bab7f7aa6d8e735d9b343"} Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.877313 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.897420 4771 generic.go:334] "Generic (PLEG): container finished" podID="c55614ed-18f8-4dab-a774-c161ab25107a" containerID="236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949" exitCode=0 Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.897498 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798cb98666-gbkq6" event={"ID":"c55614ed-18f8-4dab-a774-c161ab25107a","Type":"ContainerDied","Data":"236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949"} Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.897546 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-798cb98666-gbkq6" event={"ID":"c55614ed-18f8-4dab-a774-c161ab25107a","Type":"ContainerDied","Data":"661f5a8b8d6f11aa296ee063c5d3c611fd8f5815422507d0376325018f1cb9f7"} Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.897571 4771 scope.go:117] "RemoveContainer" containerID="236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.898008 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798cb98666-gbkq6" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.914860 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-central-agent" containerID="cri-o://b000fac6131a392f545af2ebb68e9ce0cf352e051176ce409883d5161c6a4615" gracePeriod=30 Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.915509 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerStarted","Data":"85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c"} Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.915594 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-notification-agent" containerID="cri-o://8762cfdc1fa334b043c97625e1fc97b183fd3bfeb09d437515b21cff0c5aa955" gracePeriod=30 Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.915605 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="sg-core" containerID="cri-o://2171b110e5f17015d272decfd5c3aac00e2162ba322460b8da870bae1b885cad" gracePeriod=30 Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.919396 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" podStartSLOduration=14.919384009 podStartE2EDuration="14.919384009s" podCreationTimestamp="2026-01-23 13:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:20.909076533 +0000 UTC m=+1181.931614168" watchObservedRunningTime="2026-01-23 13:52:20.919384009 +0000 UTC m=+1181.941921634" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.967062 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data-custom\") pod \"c55614ed-18f8-4dab-a774-c161ab25107a\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.967164 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-combined-ca-bundle\") pod \"c55614ed-18f8-4dab-a774-c161ab25107a\" (UID: \"c55614ed-18f8-4dab-a774-c161ab25107a\") " Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.967881 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c55614ed-18f8-4dab-a774-c161ab25107a-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.967908 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpjkl\" (UniqueName: \"kubernetes.io/projected/c55614ed-18f8-4dab-a774-c161ab25107a-kube-api-access-vpjkl\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.973057 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c55614ed-18f8-4dab-a774-c161ab25107a" (UID: "c55614ed-18f8-4dab-a774-c161ab25107a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.973356 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data" (OuterVolumeSpecName: "config-data") pod "c55614ed-18f8-4dab-a774-c161ab25107a" (UID: "c55614ed-18f8-4dab-a774-c161ab25107a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:20 crc kubenswrapper[4771]: I0123 13:52:20.997496 4771 scope.go:117] "RemoveContainer" containerID="a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.006265 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c55614ed-18f8-4dab-a774-c161ab25107a" (UID: "c55614ed-18f8-4dab-a774-c161ab25107a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.033144 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.069919 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.069956 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.069968 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55614ed-18f8-4dab-a774-c161ab25107a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.107173 4771 scope.go:117] "RemoveContainer" containerID="236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949" Jan 23 13:52:21 crc kubenswrapper[4771]: E0123 13:52:21.108972 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949\": container with ID starting with 236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949 not found: ID does not exist" containerID="236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.109034 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949"} err="failed to get container status \"236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949\": rpc error: code = NotFound desc = could not find container \"236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949\": container with ID starting with 236d620e668a366a39ebf31d1dbf3e59aece6c84a0c2c1289ae7faf44f13a949 not found: ID does not exist" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.109072 4771 scope.go:117] "RemoveContainer" containerID="a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f" Jan 23 13:52:21 crc kubenswrapper[4771]: E0123 13:52:21.112501 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f\": container with ID starting with a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f not found: ID does not exist" containerID="a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.112541 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f"} err="failed to get container status \"a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f\": rpc error: code = NotFound desc = could not find container \"a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f\": container with ID starting with a6f74243f05175dbe8ed6587367642553869429c7aecd30d6b5155a316d7505f not found: ID does not exist" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.171847 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-internal-tls-certs\") pod \"bd25f0ad-f4d3-4333-8803-cc30734719f9\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.171907 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp52f\" (UniqueName: \"kubernetes.io/projected/bd25f0ad-f4d3-4333-8803-cc30734719f9-kube-api-access-cp52f\") pod \"bd25f0ad-f4d3-4333-8803-cc30734719f9\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.172047 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-config\") pod \"bd25f0ad-f4d3-4333-8803-cc30734719f9\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.172302 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-combined-ca-bundle\") pod \"bd25f0ad-f4d3-4333-8803-cc30734719f9\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.172349 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-httpd-config\") pod \"bd25f0ad-f4d3-4333-8803-cc30734719f9\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.172386 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-public-tls-certs\") pod \"bd25f0ad-f4d3-4333-8803-cc30734719f9\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.172423 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-ovndb-tls-certs\") pod \"bd25f0ad-f4d3-4333-8803-cc30734719f9\" (UID: \"bd25f0ad-f4d3-4333-8803-cc30734719f9\") " Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.187955 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd25f0ad-f4d3-4333-8803-cc30734719f9-kube-api-access-cp52f" (OuterVolumeSpecName: "kube-api-access-cp52f") pod "bd25f0ad-f4d3-4333-8803-cc30734719f9" (UID: "bd25f0ad-f4d3-4333-8803-cc30734719f9"). InnerVolumeSpecName "kube-api-access-cp52f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.195845 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bd25f0ad-f4d3-4333-8803-cc30734719f9" (UID: "bd25f0ad-f4d3-4333-8803-cc30734719f9"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.275881 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-config" (OuterVolumeSpecName: "config") pod "bd25f0ad-f4d3-4333-8803-cc30734719f9" (UID: "bd25f0ad-f4d3-4333-8803-cc30734719f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.279360 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp52f\" (UniqueName: \"kubernetes.io/projected/bd25f0ad-f4d3-4333-8803-cc30734719f9-kube-api-access-cp52f\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.279699 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.280646 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.324829 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd25f0ad-f4d3-4333-8803-cc30734719f9" (UID: "bd25f0ad-f4d3-4333-8803-cc30734719f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.340334 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd25f0ad-f4d3-4333-8803-cc30734719f9" (UID: "bd25f0ad-f4d3-4333-8803-cc30734719f9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.362781 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bd25f0ad-f4d3-4333-8803-cc30734719f9" (UID: "bd25f0ad-f4d3-4333-8803-cc30734719f9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.383607 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.384113 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.384125 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.389486 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bd25f0ad-f4d3-4333-8803-cc30734719f9" (UID: "bd25f0ad-f4d3-4333-8803-cc30734719f9"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.467499 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-57cbdcc8d-5lcfn" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.487383 4771 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd25f0ad-f4d3-4333-8803-cc30734719f9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.563014 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-99f77f8d8-2j9s2"] Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.563811 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-99f77f8d8-2j9s2" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon" containerID="cri-o://c317b37309f1dd8f35ba92e9d8dfde672279d895829ca97dce9a0fbfdaa0aa69" gracePeriod=30 Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.563299 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-99f77f8d8-2j9s2" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon-log" containerID="cri-o://2afd31066240e94aa240c0d85614a362530e127adcbc5ed5dbe9b1eaade7ebfd" gracePeriod=30 Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.945135 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b3923e5-ae72-46e8-a077-6ac4f4481a68","Type":"ContainerStarted","Data":"9814d7c6b6d61cbea60452dd6badd1b2a63010fe46f19d48224ba55d405e6481"} Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.945891 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api-log" containerID="cri-o://86029f1d13655df0a06639549de5478e31c78d6ae13d0a7241c1830af3f28f4d" gracePeriod=30 Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.946034 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.946065 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api" containerID="cri-o://9814d7c6b6d61cbea60452dd6badd1b2a63010fe46f19d48224ba55d405e6481" gracePeriod=30 Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.979785 4771 generic.go:334] "Generic (PLEG): container finished" podID="93183170-d32d-4633-a9b5-5740232e4da4" containerID="2171b110e5f17015d272decfd5c3aac00e2162ba322460b8da870bae1b885cad" exitCode=2 Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.979854 4771 generic.go:334] "Generic (PLEG): container finished" podID="93183170-d32d-4633-a9b5-5740232e4da4" containerID="b000fac6131a392f545af2ebb68e9ce0cf352e051176ce409883d5161c6a4615" exitCode=0 Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.980010 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerDied","Data":"2171b110e5f17015d272decfd5c3aac00e2162ba322460b8da870bae1b885cad"} Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.980059 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerDied","Data":"b000fac6131a392f545af2ebb68e9ce0cf352e051176ce409883d5161c6a4615"} Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.988819 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-585dc95dd9-rtq4d" event={"ID":"bd25f0ad-f4d3-4333-8803-cc30734719f9","Type":"ContainerDied","Data":"792bf2c3f27288a7be3ed2506cd6b611c6aa43c5f6f34e75c5f054b8541585e0"} Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.988876 4771 scope.go:117] "RemoveContainer" containerID="7868613dd37b3a8d5277e8a37e8b3d59908a9becb555d7bc67322f4f77e0f548" Jan 23 13:52:21 crc kubenswrapper[4771]: I0123 13:52:21.989092 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-585dc95dd9-rtq4d" Jan 23 13:52:22 crc kubenswrapper[4771]: I0123 13:52:22.015817 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ebcf6798-eeda-492f-b006-fd957b47f36e","Type":"ContainerStarted","Data":"bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba"} Jan 23 13:52:22 crc kubenswrapper[4771]: I0123 13:52:22.032810 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=15.032785221 podStartE2EDuration="15.032785221s" podCreationTimestamp="2026-01-23 13:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:21.963807905 +0000 UTC m=+1182.986345550" watchObservedRunningTime="2026-01-23 13:52:22.032785221 +0000 UTC m=+1183.055322846" Jan 23 13:52:22 crc kubenswrapper[4771]: I0123 13:52:22.072619 4771 scope.go:117] "RemoveContainer" containerID="10e59b1f8efabf7952b88b4340e3e30fd7d851847aa43c0d75748b5d89eb2677" Jan 23 13:52:22 crc kubenswrapper[4771]: I0123 13:52:22.096368 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=15.141877428 podStartE2EDuration="16.096343804s" podCreationTimestamp="2026-01-23 13:52:06 +0000 UTC" firstStartedPulling="2026-01-23 13:52:08.57910843 +0000 UTC m=+1169.601646055" lastFinishedPulling="2026-01-23 13:52:09.533574806 +0000 UTC m=+1170.556112431" observedRunningTime="2026-01-23 13:52:22.073850351 +0000 UTC m=+1183.096387976" watchObservedRunningTime="2026-01-23 13:52:22.096343804 +0000 UTC m=+1183.118881429" Jan 23 13:52:22 crc kubenswrapper[4771]: I0123 13:52:22.124716 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-585dc95dd9-rtq4d"] Jan 23 13:52:22 crc kubenswrapper[4771]: I0123 13:52:22.145980 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-585dc95dd9-rtq4d"] Jan 23 13:52:22 crc kubenswrapper[4771]: I0123 13:52:22.466121 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c75694975-s585q" Jan 23 13:52:23 crc kubenswrapper[4771]: I0123 13:52:23.029703 4771 generic.go:334] "Generic (PLEG): container finished" podID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerID="86029f1d13655df0a06639549de5478e31c78d6ae13d0a7241c1830af3f28f4d" exitCode=143 Jan 23 13:52:23 crc kubenswrapper[4771]: I0123 13:52:23.029799 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b3923e5-ae72-46e8-a077-6ac4f4481a68","Type":"ContainerDied","Data":"86029f1d13655df0a06639549de5478e31c78d6ae13d0a7241c1830af3f28f4d"} Jan 23 13:52:23 crc 
Jan 23 13:52:23 crc kubenswrapper[4771]: I0123 13:52:23.032375 4771 generic.go:334] "Generic (PLEG): container finished" podID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerID="c317b37309f1dd8f35ba92e9d8dfde672279d895829ca97dce9a0fbfdaa0aa69" exitCode=0 Jan 23 13:52:23 crc kubenswrapper[4771]: I0123 13:52:23.032443 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-99f77f8d8-2j9s2" event={"ID":"10c5f724-de62-4d78-be40-47f2a2e11eb6","Type":"ContainerDied","Data":"c317b37309f1dd8f35ba92e9d8dfde672279d895829ca97dce9a0fbfdaa0aa69"} Jan 23 13:52:23 crc kubenswrapper[4771]: I0123 13:52:23.239853 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" path="/var/lib/kubelet/pods/bd25f0ad-f4d3-4333-8803-cc30734719f9/volumes" Jan 23 13:52:24 crc kubenswrapper[4771]: I0123 13:52:24.509798 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-99f77f8d8-2j9s2" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Jan 23 13:52:25 crc kubenswrapper[4771]: I0123 13:52:25.066924 4771 generic.go:334] "Generic (PLEG): container finished" podID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerID="85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c" exitCode=1 Jan 23 13:52:25 crc kubenswrapper[4771]: I0123 13:52:25.066990 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerDied","Data":"85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c"} Jan 23 13:52:25 crc kubenswrapper[4771]: I0123 13:52:25.067041 4771 scope.go:117] "RemoveContainer" containerID="ed64285768363f40d8a5f260897a92b40be384863b637bc51517b7501b82123c" Jan 23 13:52:25 crc kubenswrapper[4771]: I0123 13:52:25.067887 4771 scope.go:117] "RemoveContainer" containerID="85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c" Jan 23 13:52:25 crc kubenswrapper[4771]: E0123 13:52:25.068263 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ebafbd30-6f52-4209-b962-c97da4d4f9da)\"" pod="openstack/watcher-decision-engine-0" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" Jan 23 13:52:26 crc kubenswrapper[4771]: I0123 13:52:26.890964 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.039381 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.100896 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 13:52:27 crc kubenswrapper[4771]: E0123 13:52:27.105299 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api-log" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.105343 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api-log" Jan 23 13:52:27 crc kubenswrapper[4771]: E0123 13:52:27.105396 4771 cpu_manager.go:410] "RemoveStaleState:
removing container" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.105436 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api" Jan 23 13:52:27 crc kubenswrapper[4771]: E0123 13:52:27.105466 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-httpd" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.105478 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-httpd" Jan 23 13:52:27 crc kubenswrapper[4771]: E0123 13:52:27.105520 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerName="init" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.105532 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerName="init" Jan 23 13:52:27 crc kubenswrapper[4771]: E0123 13:52:27.105549 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-api" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.105564 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-api" Jan 23 13:52:27 crc kubenswrapper[4771]: E0123 13:52:27.105586 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerName="dnsmasq-dns" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.105598 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerName="dnsmasq-dns" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.105970 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0e6b33-af5f-449d-b51f-ba2725cedd3b" containerName="dnsmasq-dns" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.106025 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-api" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.106051 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api-log" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.106073 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" containerName="barbican-api" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.106092 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd25f0ad-f4d3-4333-8803-cc30734719f9" containerName="neutron-httpd" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.107161 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.115798 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.116213 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-g6rxb" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.116383 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.146642 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.151218 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-openstack-config\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.151394 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.151468 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhqg\" (UniqueName: \"kubernetes.io/projected/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-kube-api-access-djhqg\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.151516 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-openstack-config-secret\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.183116 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.254250 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-openstack-config\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.255481 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-openstack-config\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.255586 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 
13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.256606 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djhqg\" (UniqueName: \"kubernetes.io/projected/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-kube-api-access-djhqg\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.256704 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-openstack-config-secret\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.266132 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.268815 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-openstack-config-secret\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.274579 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhqg\" (UniqueName: \"kubernetes.io/projected/d52cbbcd-9ccc-4f07-a407-15edb7bde07e-kube-api-access-djhqg\") pod \"openstackclient\" (UID: \"d52cbbcd-9ccc-4f07-a407-15edb7bde07e\") " pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.432854 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 13:52:27 crc kubenswrapper[4771]: I0123 13:52:27.942559 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.006043 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.039752 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7f8875f-c2rr7"] Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.039993 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" podUID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" containerName="dnsmasq-dns" containerID="cri-o://ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80" gracePeriod=10 Jan 23 13:52:28 crc kubenswrapper[4771]: W0123 13:52:28.079701 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd52cbbcd_9ccc_4f07_a407_15edb7bde07e.slice/crio-25bae067c8627a17a71d43125bf3ba2b9a41806f8efd4131b644cc0e29619a01 WatchSource:0}: Error finding container 25bae067c8627a17a71d43125bf3ba2b9a41806f8efd4131b644cc0e29619a01: Status 404 returned error can't find the container with id 25bae067c8627a17a71d43125bf3ba2b9a41806f8efd4131b644cc0e29619a01 Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.189711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d52cbbcd-9ccc-4f07-a407-15edb7bde07e","Type":"ContainerStarted","Data":"25bae067c8627a17a71d43125bf3ba2b9a41806f8efd4131b644cc0e29619a01"} Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.189876 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="cinder-scheduler" containerID="cri-o://e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce" gracePeriod=30 Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.189986 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="probe" containerID="cri-o://bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba" gracePeriod=30 Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.507525 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.507587 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:28 crc kubenswrapper[4771]: I0123 13:52:28.508398 4771 scope.go:117] "RemoveContainer" containerID="85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c" Jan 23 13:52:28 crc kubenswrapper[4771]: E0123 13:52:28.508693 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ebafbd30-6f52-4209-b962-c97da4d4f9da)\"" pod="openstack/watcher-decision-engine-0" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.141452 4771 util.go:48] "No ready sandbox for pod can be 
Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.141452 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.211218 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-sb\") pod \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.211290 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-config\") pod \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.211340 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-svc\") pod \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.211403 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-swift-storage-0\") pod \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.211491 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-nb\") pod \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.211617 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6bgr\" (UniqueName: \"kubernetes.io/projected/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-kube-api-access-k6bgr\") pod \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\" (UID: \"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18\") " Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.222743 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-kube-api-access-k6bgr" (OuterVolumeSpecName: "kube-api-access-k6bgr") pod "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" (UID: "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18"). InnerVolumeSpecName "kube-api-access-k6bgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.232703 4771 generic.go:334] "Generic (PLEG): container finished" podID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" containerID="ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80" exitCode=0 Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.258833 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.298666 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" (UID: "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.316531 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6bgr\" (UniqueName: \"kubernetes.io/projected/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-kube-api-access-k6bgr\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.316597 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.366587 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-config" (OuterVolumeSpecName: "config") pod "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" (UID: "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.381425 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" (UID: "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.388792 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" (UID: "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.408647 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" (UID: "a6d1b2d8-f5fd-40e9-89ab-c637e6632a18"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.418814 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.418858 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.418870 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.418880 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.579237 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" event={"ID":"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18","Type":"ContainerDied","Data":"ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80"} Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.579314 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7f8875f-c2rr7" event={"ID":"a6d1b2d8-f5fd-40e9-89ab-c637e6632a18","Type":"ContainerDied","Data":"26a81e5ae7895a0a0c6748b97f7f388df8f3ebfa113b1be9765757d3de517beb"} Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.579340 4771 scope.go:117] "RemoveContainer" containerID="ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.610927 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7f8875f-c2rr7"] Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.611902 4771 scope.go:117] "RemoveContainer" containerID="dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.625606 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7f8875f-c2rr7"] Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.654765 4771 scope.go:117] "RemoveContainer" containerID="ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80" Jan 23 13:52:29 crc kubenswrapper[4771]: E0123 13:52:29.655675 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80\": container with ID starting with ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80 not found: ID does not exist" containerID="ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.655723 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80"} err="failed to get container status \"ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80\": rpc error: code = NotFound desc = could not find container \"ac2a07c21c00e5cf6f645c8c6acebdff678d33cee18124a9e3437d4a32583c80\": container with 
Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.655759 4771 scope.go:117] "RemoveContainer" containerID="dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8" Jan 23 13:52:29 crc kubenswrapper[4771]: E0123 13:52:29.656444 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8\": container with ID starting with dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8 not found: ID does not exist" containerID="dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8" Jan 23 13:52:29 crc kubenswrapper[4771]: I0123 13:52:29.656468 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8"} err="failed to get container status \"dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8\": rpc error: code = NotFound desc = could not find container \"dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8\": container with ID starting with dc00613b7ac2e5f2f6a6f810b963e56e5ac76eb34944808de8ff49278efe5da8 not found: ID does not exist" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.264110 4771 generic.go:334] "Generic (PLEG): container finished" podID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerID="bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba" exitCode=0 Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.264449 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ebcf6798-eeda-492f-b006-fd957b47f36e","Type":"ContainerDied","Data":"bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba"} Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.272142 4771 generic.go:334] "Generic (PLEG): container finished" podID="93183170-d32d-4633-a9b5-5740232e4da4" containerID="8762cfdc1fa334b043c97625e1fc97b183fd3bfeb09d437515b21cff0c5aa955" exitCode=0 Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.272207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerDied","Data":"8762cfdc1fa334b043c97625e1fc97b183fd3bfeb09d437515b21cff0c5aa955"} Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.435013 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-859b449dbf-hmxw5"] Jan 23 13:52:30 crc kubenswrapper[4771]: E0123 13:52:30.435564 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" containerName="init" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.435582 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" containerName="init" Jan 23 13:52:30 crc kubenswrapper[4771]: E0123 13:52:30.435655 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" containerName="dnsmasq-dns" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.435662 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" containerName="dnsmasq-dns" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.435856 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18"
containerName="dnsmasq-dns" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.437063 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.441085 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.441285 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.441402 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445264 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxnpn\" (UniqueName: \"kubernetes.io/projected/34108489-1bd6-4b93-a840-b58d45b1e861-kube-api-access-wxnpn\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445379 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34108489-1bd6-4b93-a840-b58d45b1e861-log-httpd\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445449 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34108489-1bd6-4b93-a840-b58d45b1e861-etc-swift\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445469 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-combined-ca-bundle\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445562 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-internal-tls-certs\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445683 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34108489-1bd6-4b93-a840-b58d45b1e861-run-httpd\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445723 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-public-tls-certs\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " 
pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.445846 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-config-data\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.470170 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-859b449dbf-hmxw5"] Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.548087 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34108489-1bd6-4b93-a840-b58d45b1e861-log-httpd\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.548132 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34108489-1bd6-4b93-a840-b58d45b1e861-etc-swift\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.548154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-combined-ca-bundle\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.548211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-internal-tls-certs\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.548650 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34108489-1bd6-4b93-a840-b58d45b1e861-log-httpd\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.549374 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34108489-1bd6-4b93-a840-b58d45b1e861-run-httpd\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.549799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-public-tls-certs\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.549914 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-config-data\") pod 
\"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.549992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxnpn\" (UniqueName: \"kubernetes.io/projected/34108489-1bd6-4b93-a840-b58d45b1e861-kube-api-access-wxnpn\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.551008 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/34108489-1bd6-4b93-a840-b58d45b1e861-run-httpd\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.560085 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-config-data\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.567352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-combined-ca-bundle\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.567388 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-internal-tls-certs\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.576261 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34108489-1bd6-4b93-a840-b58d45b1e861-etc-swift\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.577600 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxnpn\" (UniqueName: \"kubernetes.io/projected/34108489-1bd6-4b93-a840-b58d45b1e861-kube-api-access-wxnpn\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.588639 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34108489-1bd6-4b93-a840-b58d45b1e861-public-tls-certs\") pod \"swift-proxy-859b449dbf-hmxw5\" (UID: \"34108489-1bd6-4b93-a840-b58d45b1e861\") " pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.672208 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.753769 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-run-httpd\") pod \"93183170-d32d-4633-a9b5-5740232e4da4\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.753902 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc74c\" (UniqueName: \"kubernetes.io/projected/93183170-d32d-4633-a9b5-5740232e4da4-kube-api-access-bc74c\") pod \"93183170-d32d-4633-a9b5-5740232e4da4\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.754075 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-sg-core-conf-yaml\") pod \"93183170-d32d-4633-a9b5-5740232e4da4\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.754144 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-log-httpd\") pod \"93183170-d32d-4633-a9b5-5740232e4da4\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.754182 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "93183170-d32d-4633-a9b5-5740232e4da4" (UID: "93183170-d32d-4633-a9b5-5740232e4da4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.754300 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-scripts\") pod \"93183170-d32d-4633-a9b5-5740232e4da4\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.754380 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-combined-ca-bundle\") pod \"93183170-d32d-4633-a9b5-5740232e4da4\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.754493 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-config-data\") pod \"93183170-d32d-4633-a9b5-5740232e4da4\" (UID: \"93183170-d32d-4633-a9b5-5740232e4da4\") " Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.754711 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "93183170-d32d-4633-a9b5-5740232e4da4" (UID: "93183170-d32d-4633-a9b5-5740232e4da4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.755101 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.755126 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93183170-d32d-4633-a9b5-5740232e4da4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.758165 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.762577 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-scripts" (OuterVolumeSpecName: "scripts") pod "93183170-d32d-4633-a9b5-5740232e4da4" (UID: "93183170-d32d-4633-a9b5-5740232e4da4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.762838 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93183170-d32d-4633-a9b5-5740232e4da4-kube-api-access-bc74c" (OuterVolumeSpecName: "kube-api-access-bc74c") pod "93183170-d32d-4633-a9b5-5740232e4da4" (UID: "93183170-d32d-4633-a9b5-5740232e4da4"). InnerVolumeSpecName "kube-api-access-bc74c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.792901 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "93183170-d32d-4633-a9b5-5740232e4da4" (UID: "93183170-d32d-4633-a9b5-5740232e4da4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.831682 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93183170-d32d-4633-a9b5-5740232e4da4" (UID: "93183170-d32d-4633-a9b5-5740232e4da4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.856696 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.857475 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.857539 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.857593 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc74c\" (UniqueName: \"kubernetes.io/projected/93183170-d32d-4633-a9b5-5740232e4da4-kube-api-access-bc74c\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.875788 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-config-data" (OuterVolumeSpecName: "config-data") pod "93183170-d32d-4633-a9b5-5740232e4da4" (UID: "93183170-d32d-4633-a9b5-5740232e4da4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:30 crc kubenswrapper[4771]: I0123 13:52:30.960337 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93183170-d32d-4633-a9b5-5740232e4da4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.253082 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d1b2d8-f5fd-40e9-89ab-c637e6632a18" path="/var/lib/kubelet/pods/a6d1b2d8-f5fd-40e9-89ab-c637e6632a18/volumes" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.316521 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93183170-d32d-4633-a9b5-5740232e4da4","Type":"ContainerDied","Data":"7134d8ec5944480e9bdbe56aae45138b3bcc0bc778bf8609e2671a843061f0a9"} Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.316608 4771 scope.go:117] "RemoveContainer" containerID="2171b110e5f17015d272decfd5c3aac00e2162ba322460b8da870bae1b885cad" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.316624 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.387495 4771 scope.go:117] "RemoveContainer" containerID="8762cfdc1fa334b043c97625e1fc97b183fd3bfeb09d437515b21cff0c5aa955" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.428867 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.452366 4771 scope.go:117] "RemoveContainer" containerID="b000fac6131a392f545af2ebb68e9ce0cf352e051176ce409883d5161c6a4615" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.473491 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.632697 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:31 crc kubenswrapper[4771]: E0123 13:52:31.633532 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-central-agent" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.633557 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-central-agent" Jan 23 13:52:31 crc kubenswrapper[4771]: E0123 13:52:31.633673 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="sg-core" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.633682 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="sg-core" Jan 23 13:52:31 crc kubenswrapper[4771]: E0123 13:52:31.633703 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-notification-agent" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.633710 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-notification-agent" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.634375 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-notification-agent" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.634516 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="sg-core" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.634530 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="93183170-d32d-4633-a9b5-5740232e4da4" containerName="ceilometer-central-agent" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.640284 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.649209 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.649401 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.655627 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.679818 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-859b449dbf-hmxw5"] Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.831639 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-run-httpd\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.831828 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-scripts\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.831978 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-log-httpd\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.832026 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.832091 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pbkc\" (UniqueName: \"kubernetes.io/projected/2e2ac679-4a87-49cf-aa8d-d7d590736174-kube-api-access-5pbkc\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.832339 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.832518 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-config-data\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.909728 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:31 crc kubenswrapper[4771]: E0123 13:52:31.918606 4771 pod_workers.go:1301] "Error 
syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-5pbkc log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="2e2ac679-4a87-49cf-aa8d-d7d590736174" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.934232 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-log-httpd\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.934286 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.934316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pbkc\" (UniqueName: \"kubernetes.io/projected/2e2ac679-4a87-49cf-aa8d-d7d590736174-kube-api-access-5pbkc\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.934361 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.934436 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-config-data\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.934462 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-run-httpd\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.934512 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-scripts\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.936660 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-run-httpd\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.936985 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-log-httpd\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.940768 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-scripts\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.940856 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.941367 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-config-data\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.945121 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:31 crc kubenswrapper[4771]: I0123 13:52:31.967177 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pbkc\" (UniqueName: \"kubernetes.io/projected/2e2ac679-4a87-49cf-aa8d-d7d590736174-kube-api-access-5pbkc\") pod \"ceilometer-0\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " pod="openstack/ceilometer-0" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.064873 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.328860 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-859b449dbf-hmxw5" event={"ID":"34108489-1bd6-4b93-a840-b58d45b1e861","Type":"ContainerStarted","Data":"81c0e96f0b1cd7328620324baae5b78687fe5f33d2b1fd28ff0dda0669b01ef7"} Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.329375 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-859b449dbf-hmxw5" event={"ID":"34108489-1bd6-4b93-a840-b58d45b1e861","Type":"ContainerStarted","Data":"d3461b3324a6721ceb77499a8e59b13f2a76647c896fe69a4923842a4fdd78af"} Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.329390 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-859b449dbf-hmxw5" event={"ID":"34108489-1bd6-4b93-a840-b58d45b1e861","Type":"ContainerStarted","Data":"9e728164d510cbc0a481fdb290eff3dc98cf95a64c904961e217d1567bea4adb"} Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.335807 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.356249 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.369023 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-859b449dbf-hmxw5" podStartSLOduration=2.368997596 podStartE2EDuration="2.368997596s" podCreationTimestamp="2026-01-23 13:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:32.359016679 +0000 UTC m=+1193.381554324" watchObservedRunningTime="2026-01-23 13:52:32.368997596 +0000 UTC m=+1193.391535221" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549028 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-combined-ca-bundle\") pod \"2e2ac679-4a87-49cf-aa8d-d7d590736174\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549154 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-sg-core-conf-yaml\") pod \"2e2ac679-4a87-49cf-aa8d-d7d590736174\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549228 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-run-httpd\") pod \"2e2ac679-4a87-49cf-aa8d-d7d590736174\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549441 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-scripts\") pod \"2e2ac679-4a87-49cf-aa8d-d7d590736174\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549493 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-log-httpd\") pod \"2e2ac679-4a87-49cf-aa8d-d7d590736174\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549511 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pbkc\" (UniqueName: \"kubernetes.io/projected/2e2ac679-4a87-49cf-aa8d-d7d590736174-kube-api-access-5pbkc\") pod \"2e2ac679-4a87-49cf-aa8d-d7d590736174\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549550 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2e2ac679-4a87-49cf-aa8d-d7d590736174" (UID: "2e2ac679-4a87-49cf-aa8d-d7d590736174"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549588 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-config-data\") pod \"2e2ac679-4a87-49cf-aa8d-d7d590736174\" (UID: \"2e2ac679-4a87-49cf-aa8d-d7d590736174\") " Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.549764 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2e2ac679-4a87-49cf-aa8d-d7d590736174" (UID: "2e2ac679-4a87-49cf-aa8d-d7d590736174"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.550160 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.550183 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e2ac679-4a87-49cf-aa8d-d7d590736174-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.557704 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-config-data" (OuterVolumeSpecName: "config-data") pod "2e2ac679-4a87-49cf-aa8d-d7d590736174" (UID: "2e2ac679-4a87-49cf-aa8d-d7d590736174"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.558250 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-scripts" (OuterVolumeSpecName: "scripts") pod "2e2ac679-4a87-49cf-aa8d-d7d590736174" (UID: "2e2ac679-4a87-49cf-aa8d-d7d590736174"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.558287 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e2ac679-4a87-49cf-aa8d-d7d590736174" (UID: "2e2ac679-4a87-49cf-aa8d-d7d590736174"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.558373 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2e2ac679-4a87-49cf-aa8d-d7d590736174" (UID: "2e2ac679-4a87-49cf-aa8d-d7d590736174"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.560722 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2ac679-4a87-49cf-aa8d-d7d590736174-kube-api-access-5pbkc" (OuterVolumeSpecName: "kube-api-access-5pbkc") pod "2e2ac679-4a87-49cf-aa8d-d7d590736174" (UID: "2e2ac679-4a87-49cf-aa8d-d7d590736174"). InnerVolumeSpecName "kube-api-access-5pbkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.652251 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.652297 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.652319 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pbkc\" (UniqueName: \"kubernetes.io/projected/2e2ac679-4a87-49cf-aa8d-d7d590736174-kube-api-access-5pbkc\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.652330 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:32 crc kubenswrapper[4771]: I0123 13:52:32.652341 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2ac679-4a87-49cf-aa8d-d7d590736174-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.078958 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.250358 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93183170-d32d-4633-a9b5-5740232e4da4" path="/var/lib/kubelet/pods/93183170-d32d-4633-a9b5-5740232e4da4/volumes" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.269578 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-scripts\") pod \"ebcf6798-eeda-492f-b006-fd957b47f36e\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.269686 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-combined-ca-bundle\") pod \"ebcf6798-eeda-492f-b006-fd957b47f36e\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.269830 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebcf6798-eeda-492f-b006-fd957b47f36e-etc-machine-id\") pod \"ebcf6798-eeda-492f-b006-fd957b47f36e\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.269931 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d56tf\" (UniqueName: \"kubernetes.io/projected/ebcf6798-eeda-492f-b006-fd957b47f36e-kube-api-access-d56tf\") pod \"ebcf6798-eeda-492f-b006-fd957b47f36e\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.269991 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data\") pod \"ebcf6798-eeda-492f-b006-fd957b47f36e\" (UID: 
\"ebcf6798-eeda-492f-b006-fd957b47f36e\") " Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.270180 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data-custom\") pod \"ebcf6798-eeda-492f-b006-fd957b47f36e\" (UID: \"ebcf6798-eeda-492f-b006-fd957b47f36e\") " Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.275918 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebcf6798-eeda-492f-b006-fd957b47f36e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ebcf6798-eeda-492f-b006-fd957b47f36e" (UID: "ebcf6798-eeda-492f-b006-fd957b47f36e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.279625 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebcf6798-eeda-492f-b006-fd957b47f36e-kube-api-access-d56tf" (OuterVolumeSpecName: "kube-api-access-d56tf") pod "ebcf6798-eeda-492f-b006-fd957b47f36e" (UID: "ebcf6798-eeda-492f-b006-fd957b47f36e"). InnerVolumeSpecName "kube-api-access-d56tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.281610 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-scripts" (OuterVolumeSpecName: "scripts") pod "ebcf6798-eeda-492f-b006-fd957b47f36e" (UID: "ebcf6798-eeda-492f-b006-fd957b47f36e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.283064 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ebcf6798-eeda-492f-b006-fd957b47f36e" (UID: "ebcf6798-eeda-492f-b006-fd957b47f36e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.342236 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebcf6798-eeda-492f-b006-fd957b47f36e" (UID: "ebcf6798-eeda-492f-b006-fd957b47f36e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.354283 4771 generic.go:334] "Generic (PLEG): container finished" podID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerID="e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce" exitCode=0 Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.354656 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ebcf6798-eeda-492f-b006-fd957b47f36e","Type":"ContainerDied","Data":"e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce"} Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.354732 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ebcf6798-eeda-492f-b006-fd957b47f36e","Type":"ContainerDied","Data":"05f14459054d48ae4107866d5d70b0859ed997c07e80898562f4bc3ad859c4aa"} Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.354758 4771 scope.go:117] "RemoveContainer" containerID="bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.354977 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.355582 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.356179 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.356226 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.376091 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d56tf\" (UniqueName: \"kubernetes.io/projected/ebcf6798-eeda-492f-b006-fd957b47f36e-kube-api-access-d56tf\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.376130 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.376141 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.376156 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.376166 4771 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebcf6798-eeda-492f-b006-fd957b47f36e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.420443 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data" (OuterVolumeSpecName: "config-data") pod "ebcf6798-eeda-492f-b006-fd957b47f36e" (UID: "ebcf6798-eeda-492f-b006-fd957b47f36e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.479999 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcf6798-eeda-492f-b006-fd957b47f36e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.521879 4771 scope.go:117] "RemoveContainer" containerID="e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.533736 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.543694 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.566908 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: E0123 13:52:33.567534 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="probe" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.567552 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="probe" Jan 23 13:52:33 crc kubenswrapper[4771]: E0123 13:52:33.567586 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="cinder-scheduler" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.567593 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="cinder-scheduler" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.567788 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="cinder-scheduler" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.567809 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" containerName="probe" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.568997 4771 scope.go:117] "RemoveContainer" containerID="bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.570731 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: E0123 13:52:33.571809 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba\": container with ID starting with bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba not found: ID does not exist" containerID="bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.571840 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba"} err="failed to get container status \"bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba\": rpc error: code = NotFound desc = could not find container \"bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba\": container with ID starting with bb2207fd5dd10d0ee3f658d52c3fa20d471e2a074ff83670cc98b342cc2ec5ba not found: ID does not exist" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.571870 4771 scope.go:117] "RemoveContainer" containerID="e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce" Jan 23 13:52:33 crc kubenswrapper[4771]: E0123 13:52:33.572246 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce\": container with ID starting with e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce not found: ID does not exist" containerID="e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.572277 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce"} err="failed to get container status \"e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce\": rpc error: code = NotFound desc = could not find container \"e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce\": container with ID starting with e6c791baedda28ef4b6e66bded456b1e7aaa2139d882345dd261a1ab6f8a01ce not found: ID does not exist" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.575309 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.575507 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.590001 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.690118 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-run-httpd\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.690755 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-scripts\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" 
Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.690821 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-config-data\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.690860 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj469\" (UniqueName: \"kubernetes.io/projected/0733fe58-0f4d-46d7-ac56-7382dba66c56-kube-api-access-hj469\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.690921 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-log-httpd\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.690982 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.691030 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.736499 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.759053 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.774234 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.777829 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.796814 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.808498 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-scripts\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.808645 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-config-data\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.808750 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj469\" (UniqueName: \"kubernetes.io/projected/0733fe58-0f4d-46d7-ac56-7382dba66c56-kube-api-access-hj469\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.808836 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-log-httpd\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.811350 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-log-httpd\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.811457 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.811533 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.811823 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-run-httpd\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.812522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-run-httpd\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.828722 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-scripts\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.850871 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-config-data\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.859272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.865650 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.872141 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.880457 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj469\" (UniqueName: \"kubernetes.io/projected/0733fe58-0f4d-46d7-ac56-7382dba66c56-kube-api-access-hj469\") pod \"ceilometer-0\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.911207 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.932142 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-scripts\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.932180 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.932219 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttcnj\" (UniqueName: \"kubernetes.io/projected/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-kube-api-access-ttcnj\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.932246 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.932300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:33 crc kubenswrapper[4771]: I0123 13:52:33.932350 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-config-data\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.034554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-config-data\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.034718 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-scripts\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.034753 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.034785 4771 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttcnj\" (UniqueName: \"kubernetes.io/projected/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-kube-api-access-ttcnj\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.034825 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.034891 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.035016 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.043185 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.052352 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.056778 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-config-data\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.057498 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-scripts\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.072092 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttcnj\" (UniqueName: \"kubernetes.io/projected/eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b-kube-api-access-ttcnj\") pod \"cinder-scheduler-0\" (UID: \"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b\") " pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.239991 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.517892 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-99f77f8d8-2j9s2" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.555641 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.962513 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:34 crc kubenswrapper[4771]: I0123 13:52:34.977292 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 13:52:35 crc kubenswrapper[4771]: I0123 13:52:35.251324 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2ac679-4a87-49cf-aa8d-d7d590736174" path="/var/lib/kubelet/pods/2e2ac679-4a87-49cf-aa8d-d7d590736174/volumes" Jan 23 13:52:35 crc kubenswrapper[4771]: I0123 13:52:35.251811 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebcf6798-eeda-492f-b006-fd957b47f36e" path="/var/lib/kubelet/pods/ebcf6798-eeda-492f-b006-fd957b47f36e/volumes" Jan 23 13:52:35 crc kubenswrapper[4771]: I0123 13:52:35.409380 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b","Type":"ContainerStarted","Data":"9d618fc9a6a8e0b4edfc44ee2ecdf31429d0a1abc4e6a7a91079072291f3e70f"} Jan 23 13:52:35 crc kubenswrapper[4771]: I0123 13:52:35.413262 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerStarted","Data":"080bfb2b2d36752e2e435f84cb065c7036d23a26594e3c1f7175f6607a692914"} Jan 23 13:52:36 crc kubenswrapper[4771]: I0123 13:52:36.075288 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fd76d6849-9jhnn" Jan 23 13:52:36 crc kubenswrapper[4771]: I0123 13:52:36.156675 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fcb4fcfd8-xrpf8"] Jan 23 13:52:36 crc kubenswrapper[4771]: I0123 13:52:36.156955 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fcb4fcfd8-xrpf8" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-api" containerID="cri-o://b836a0d0efd333e75b5c1fe07f5044a162ece8005a9eae7e94a569cee5821a88" gracePeriod=30 Jan 23 13:52:36 crc kubenswrapper[4771]: I0123 13:52:36.157423 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5fcb4fcfd8-xrpf8" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-httpd" containerID="cri-o://093e37b9fead0067733413c974f0062960f36dd697f478b80976511b3346bc5c" gracePeriod=30 Jan 23 13:52:36 crc kubenswrapper[4771]: I0123 13:52:36.464825 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b","Type":"ContainerStarted","Data":"4b9e5dc3eb5eb3e11e14334c9b67e74c6b0f50a0051fb941d4dcd2395d0cb1a3"} Jan 23 13:52:37 crc kubenswrapper[4771]: I0123 13:52:37.486803 4771 generic.go:334] "Generic (PLEG): container finished" podID="62e350a1-5498-4d62-9d4a-3382d3ed1369" 
containerID="093e37b9fead0067733413c974f0062960f36dd697f478b80976511b3346bc5c" exitCode=0 Jan 23 13:52:37 crc kubenswrapper[4771]: I0123 13:52:37.486883 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcb4fcfd8-xrpf8" event={"ID":"62e350a1-5498-4d62-9d4a-3382d3ed1369","Type":"ContainerDied","Data":"093e37b9fead0067733413c974f0062960f36dd697f478b80976511b3346bc5c"} Jan 23 13:52:38 crc kubenswrapper[4771]: I0123 13:52:38.028787 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.192:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:52:40 crc kubenswrapper[4771]: I0123 13:52:40.766158 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:40 crc kubenswrapper[4771]: I0123 13:52:40.770250 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-859b449dbf-hmxw5" Jan 23 13:52:41 crc kubenswrapper[4771]: I0123 13:52:41.554380 4771 generic.go:334] "Generic (PLEG): container finished" podID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerID="b836a0d0efd333e75b5c1fe07f5044a162ece8005a9eae7e94a569cee5821a88" exitCode=0 Jan 23 13:52:41 crc kubenswrapper[4771]: I0123 13:52:41.554695 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcb4fcfd8-xrpf8" event={"ID":"62e350a1-5498-4d62-9d4a-3382d3ed1369","Type":"ContainerDied","Data":"b836a0d0efd333e75b5c1fe07f5044a162ece8005a9eae7e94a569cee5821a88"} Jan 23 13:52:41 crc kubenswrapper[4771]: I0123 13:52:41.670431 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 13:52:41 crc kubenswrapper[4771]: I0123 13:52:41.671271 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-log" containerID="cri-o://02c8d4145423a8dc0a6b97fb6bb64412e61d5058cf7485df16a04a69c93c139f" gracePeriod=30 Jan 23 13:52:41 crc kubenswrapper[4771]: I0123 13:52:41.671377 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-httpd" containerID="cri-o://f2a04affa6a1bdccb7c160162a760998b648e41709e2b16b09195df8a4863b71" gracePeriod=30 Jan 23 13:52:42 crc kubenswrapper[4771]: I0123 13:52:42.569302 4771 generic.go:334] "Generic (PLEG): container finished" podID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerID="02c8d4145423a8dc0a6b97fb6bb64412e61d5058cf7485df16a04a69c93c139f" exitCode=143 Jan 23 13:52:42 crc kubenswrapper[4771]: I0123 13:52:42.569807 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc","Type":"ContainerDied","Data":"02c8d4145423a8dc0a6b97fb6bb64412e61d5058cf7485df16a04a69c93c139f"} Jan 23 13:52:43 crc kubenswrapper[4771]: I0123 13:52:43.244316 4771 scope.go:117] "RemoveContainer" containerID="85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c" Jan 23 13:52:43 crc kubenswrapper[4771]: E0123 13:52:43.245920 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 
20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ebafbd30-6f52-4209-b962-c97da4d4f9da)\"" pod="openstack/watcher-decision-engine-0" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da"
Jan 23 13:52:43 crc kubenswrapper[4771]: I0123 13:52:43.598952 4771 generic.go:334] "Generic (PLEG): container finished" podID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerID="f2a04affa6a1bdccb7c160162a760998b648e41709e2b16b09195df8a4863b71" exitCode=0
Jan 23 13:52:43 crc kubenswrapper[4771]: I0123 13:52:43.599030 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc","Type":"ContainerDied","Data":"f2a04affa6a1bdccb7c160162a760998b648e41709e2b16b09195df8a4863b71"}
Jan 23 13:52:44 crc kubenswrapper[4771]: I0123 13:52:44.509656 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-99f77f8d8-2j9s2" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused"
Jan 23 13:52:44 crc kubenswrapper[4771]: I0123 13:52:44.511688 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-99f77f8d8-2j9s2"
Jan 23 13:52:45 crc kubenswrapper[4771]: E0123 13:52:45.366266 4771 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-openstackclient:watcher_latest"
Jan 23 13:52:45 crc kubenswrapper[4771]: E0123 13:52:45.366387 4771 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.129.56.240:5001/podified-master-centos10/openstack-openstackclient:watcher_latest"
Jan 23 13:52:45 crc kubenswrapper[4771]: E0123 13:52:45.366609 4771 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:38.129.56.240:5001/podified-master-centos10/openstack-openstackclient:watcher_latest,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f7h59fh6bhd6h67ch5f7h679h85h59ch5b7hc6h559h9dh96h99h669h577h5hf7hd5h568hd8h5ddh6h649h9fh648hb7hcch84h568h56fq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djhqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(d52cbbcd-9ccc-4f07-a407-15edb7bde07e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 13:52:45 crc kubenswrapper[4771]: E0123 13:52:45.367837 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="d52cbbcd-9ccc-4f07-a407-15edb7bde07e"
Jan 23 13:52:45 crc kubenswrapper[4771]: E0123 13:52:45.637762 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.240:5001/podified-master-centos10/openstack-openstackclient:watcher_latest\\\"\"" pod="openstack/openstackclient" podUID="d52cbbcd-9ccc-4f07-a407-15edb7bde07e"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.018458 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fcb4fcfd8-xrpf8"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.077374 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.214873 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-config\") pod \"62e350a1-5498-4d62-9d4a-3382d3ed1369\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.214930 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-ovndb-tls-certs\") pod \"62e350a1-5498-4d62-9d4a-3382d3ed1369\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215020 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-scripts\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215061 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-httpd-run\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215099 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215134 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87nsc\" (UniqueName: \"kubernetes.io/projected/62e350a1-5498-4d62-9d4a-3382d3ed1369-kube-api-access-87nsc\") pod \"62e350a1-5498-4d62-9d4a-3382d3ed1369\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215163 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-httpd-config\") pod \"62e350a1-5498-4d62-9d4a-3382d3ed1369\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215183 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-internal-tls-certs\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215229 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-logs\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215774 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.215943 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-combined-ca-bundle\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.216017 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-config-data\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.216112 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6677p\" (UniqueName: \"kubernetes.io/projected/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-kube-api-access-6677p\") pod \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\" (UID: \"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.216189 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-combined-ca-bundle\") pod \"62e350a1-5498-4d62-9d4a-3382d3ed1369\" (UID: \"62e350a1-5498-4d62-9d4a-3382d3ed1369\") "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.216292 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-logs" (OuterVolumeSpecName: "logs") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.216688 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.216705 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-logs\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.225633 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e350a1-5498-4d62-9d4a-3382d3ed1369-kube-api-access-87nsc" (OuterVolumeSpecName: "kube-api-access-87nsc") pod "62e350a1-5498-4d62-9d4a-3382d3ed1369" (UID: "62e350a1-5498-4d62-9d4a-3382d3ed1369"). InnerVolumeSpecName "kube-api-access-87nsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.232196 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-scripts" (OuterVolumeSpecName: "scripts") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.233988 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "62e350a1-5498-4d62-9d4a-3382d3ed1369" (UID: "62e350a1-5498-4d62-9d4a-3382d3ed1369"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.244616 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.247026 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-kube-api-access-6677p" (OuterVolumeSpecName: "kube-api-access-6677p") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "kube-api-access-6677p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.280665 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.319912 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.319967 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.319981 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87nsc\" (UniqueName: \"kubernetes.io/projected/62e350a1-5498-4d62-9d4a-3382d3ed1369-kube-api-access-87nsc\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.319996 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.320009 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.320021 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6677p\" (UniqueName: \"kubernetes.io/projected/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-kube-api-access-6677p\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.409586 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-config" (OuterVolumeSpecName: "config") pod "62e350a1-5498-4d62-9d4a-3382d3ed1369" (UID: "62e350a1-5498-4d62-9d4a-3382d3ed1369"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.424314 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-config\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.438617 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.479825 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-config-data" (OuterVolumeSpecName: "config-data") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.497341 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62e350a1-5498-4d62-9d4a-3382d3ed1369" (UID: "62e350a1-5498-4d62-9d4a-3382d3ed1369"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.497760 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" (UID: "cf755a42-34c6-4b24-a2c5-3ab3296d8bdc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.527061 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.527130 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.527146 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.527160 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.540616 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "62e350a1-5498-4d62-9d4a-3382d3ed1369" (UID: "62e350a1-5498-4d62-9d4a-3382d3ed1369"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.629186 4771 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/62e350a1-5498-4d62-9d4a-3382d3ed1369-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.645312 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerStarted","Data":"a22868f61cc898341f78672e798992c2d3fb258f887a3ea75640b043a476e50e"}
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.645379 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerStarted","Data":"48a04ffdf869bbaccc67192069e6e5b56844de2651764fd24e866e2045656a11"}
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.648145 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cf755a42-34c6-4b24-a2c5-3ab3296d8bdc","Type":"ContainerDied","Data":"1dd0c4604533e99ba7ce4ab91d0d88b53721208932366ba691a0ae8362c412e6"}
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.648183 4771 scope.go:117] "RemoveContainer" containerID="f2a04affa6a1bdccb7c160162a760998b648e41709e2b16b09195df8a4863b71"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.648202 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.658493 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b","Type":"ContainerStarted","Data":"fb2c8565b02061ca8ab31e3d365a4e45524dedd2649d07fbb157e1a70c0e8446"}
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.676094 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fcb4fcfd8-xrpf8" event={"ID":"62e350a1-5498-4d62-9d4a-3382d3ed1369","Type":"ContainerDied","Data":"82a483c81bf1c7e9df33d3b3fe6aa29d03bd3b04d0ed2246e56369acee56a99d"}
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.676254 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fcb4fcfd8-xrpf8"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.704551 4771 scope.go:117] "RemoveContainer" containerID="02c8d4145423a8dc0a6b97fb6bb64412e61d5058cf7485df16a04a69c93c139f"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.707683 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=13.707642182 podStartE2EDuration="13.707642182s" podCreationTimestamp="2026-01-23 13:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:46.678754596 +0000 UTC m=+1207.701292221" watchObservedRunningTime="2026-01-23 13:52:46.707642182 +0000 UTC m=+1207.730179807"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.744833 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.751740 4771 scope.go:117] "RemoveContainer" containerID="093e37b9fead0067733413c974f0062960f36dd697f478b80976511b3346bc5c"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.759498 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.769518 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 13:52:46 crc kubenswrapper[4771]: E0123 13:52:46.770173 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-httpd"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770198 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-httpd"
Jan 23 13:52:46 crc kubenswrapper[4771]: E0123 13:52:46.770213 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-api"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770221 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-api"
Jan 23 13:52:46 crc kubenswrapper[4771]: E0123 13:52:46.770235 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-httpd"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770246 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-httpd"
Jan 23 13:52:46 crc kubenswrapper[4771]: E0123 13:52:46.770265 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-log"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770272 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-log"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770539 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-httpd"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770569 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-api"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770589 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" containerName="glance-log"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.770603 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" containerName="neutron-httpd"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.772047 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.777878 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.778194 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.778310 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5fcb4fcfd8-xrpf8"]
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.793137 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5fcb4fcfd8-xrpf8"]
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.814367 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.831649 4771 scope.go:117] "RemoveContainer" containerID="b836a0d0efd333e75b5c1fe07f5044a162ece8005a9eae7e94a569cee5821a88"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938247 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938699 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938737 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938780 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938809 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2csnj\" (UniqueName: \"kubernetes.io/projected/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-kube-api-access-2csnj\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938927 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:46 crc kubenswrapper[4771]: I0123 13:52:46.938995 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-logs\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.041670 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.041821 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-logs\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.042001 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.042064 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.042090 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.042133 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.042165 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.042200 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2csnj\" (UniqueName: \"kubernetes.io/projected/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-kube-api-access-2csnj\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.042675 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-logs\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.043325 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.044229 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.049360 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-config-data\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.049973 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.050239 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.056136 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-scripts\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.065016 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2csnj\" (UniqueName: \"kubernetes.io/projected/361f4847-2b8b-40d4-b0cf-2eca9dc1c5db-kube-api-access-2csnj\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.083359 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db\") " pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.119286 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.248890 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e350a1-5498-4d62-9d4a-3382d3ed1369" path="/var/lib/kubelet/pods/62e350a1-5498-4d62-9d4a-3382d3ed1369/volumes"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.250963 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf755a42-34c6-4b24-a2c5-3ab3296d8bdc" path="/var/lib/kubelet/pods/cf755a42-34c6-4b24-a2c5-3ab3296d8bdc/volumes"
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.693474 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerStarted","Data":"f738fea877a811121270de40352c1d2337ef260968876e862b62a51354bba927"}
Jan 23 13:52:47 crc kubenswrapper[4771]: I0123 13:52:47.863257 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 13:52:48 crc kubenswrapper[4771]: I0123 13:52:48.506626 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 23 13:52:48 crc kubenswrapper[4771]: I0123 13:52:48.507159 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 23 13:52:48 crc kubenswrapper[4771]: I0123 13:52:48.508424 4771 scope.go:117] "RemoveContainer" containerID="85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c"
Jan 23 13:52:48 crc kubenswrapper[4771]: I0123 13:52:48.724207 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db","Type":"ContainerStarted","Data":"4b3211002f0cdb6312dc641a262763f50cdb6e631bfcf6c33e567d3d65cc0e92"}
Jan 23 13:52:48 crc kubenswrapper[4771]: I0123 13:52:48.724272 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db","Type":"ContainerStarted","Data":"ce3c2819c1b7a8a48b52234df56599cf1c8c0381e954b4ec6f917c79c9113048"}
Jan 23 13:52:49 crc kubenswrapper[4771]: I0123 13:52:49.251144 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 23 13:52:49 crc kubenswrapper[4771]: I0123 13:52:49.456324 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 23 13:52:49 crc kubenswrapper[4771]: I0123 13:52:49.738747 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerStarted","Data":"7dfd3ffe2bc11f3fbc6ea00e102c90f8f98b6ef5eefecf4ea870b9516452b295"}
Jan 23 13:52:51 crc kubenswrapper[4771]: I0123 13:52:51.466750 4771 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podc55614ed-18f8-4dab-a774-c161ab25107a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podc55614ed-18f8-4dab-a774-c161ab25107a] : Timed out while waiting for systemd to remove kubepods-besteffort-podc55614ed_18f8_4dab_a774_c161ab25107a.slice"
Jan 23 13:52:51 crc kubenswrapper[4771]: E0123 13:52:51.466837 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podc55614ed-18f8-4dab-a774-c161ab25107a] : unable to destroy cgroup paths for cgroup [kubepods besteffort podc55614ed-18f8-4dab-a774-c161ab25107a] : Timed out while waiting for systemd to remove kubepods-besteffort-podc55614ed_18f8_4dab_a774_c161ab25107a.slice" pod="openstack/barbican-api-798cb98666-gbkq6" podUID="c55614ed-18f8-4dab-a774-c161ab25107a"
Jan 23 13:52:51 crc kubenswrapper[4771]: I0123 13:52:51.766786 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerStarted","Data":"6c5be730524566fc65e3b52c4996494d02387f4b8fb11fe6759499fcec898993"}
Jan 23 13:52:51 crc kubenswrapper[4771]: I0123 13:52:51.766843 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-798cb98666-gbkq6"
Jan 23 13:52:51 crc kubenswrapper[4771]: I0123 13:52:51.809526 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-798cb98666-gbkq6"]
Jan 23 13:52:51 crc kubenswrapper[4771]: I0123 13:52:51.825091 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-798cb98666-gbkq6"]
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.816309 4771 generic.go:334] "Generic (PLEG): container finished" podID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerID="2afd31066240e94aa240c0d85614a362530e127adcbc5ed5dbe9b1eaade7ebfd" exitCode=137
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.816602 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-99f77f8d8-2j9s2" event={"ID":"10c5f724-de62-4d78-be40-47f2a2e11eb6","Type":"ContainerDied","Data":"2afd31066240e94aa240c0d85614a362530e127adcbc5ed5dbe9b1eaade7ebfd"}
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.820839 4771 generic.go:334] "Generic (PLEG): container finished" podID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerID="9814d7c6b6d61cbea60452dd6badd1b2a63010fe46f19d48224ba55d405e6481" exitCode=137
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.820925 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b3923e5-ae72-46e8-a077-6ac4f4481a68","Type":"ContainerDied","Data":"9814d7c6b6d61cbea60452dd6badd1b2a63010fe46f19d48224ba55d405e6481"}
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.826673 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"361f4847-2b8b-40d4-b0cf-2eca9dc1c5db","Type":"ContainerStarted","Data":"aa142e12ad42597b2ef284f3c84bf4a05d213610370af817ed0bf6b054e782ec"}
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.827422 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-central-agent" containerID="cri-o://48a04ffdf869bbaccc67192069e6e5b56844de2651764fd24e866e2045656a11" gracePeriod=30
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.827578 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="proxy-httpd" containerID="cri-o://6c5be730524566fc65e3b52c4996494d02387f4b8fb11fe6759499fcec898993" gracePeriod=30
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.827640 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="sg-core" containerID="cri-o://f738fea877a811121270de40352c1d2337ef260968876e862b62a51354bba927" gracePeriod=30
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.827684 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-notification-agent" containerID="cri-o://a22868f61cc898341f78672e798992c2d3fb258f887a3ea75640b043a476e50e" gracePeriod=30
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.877132 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.877106341 podStartE2EDuration="6.877106341s" podCreationTimestamp="2026-01-23 13:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:52.854591697 +0000 UTC m=+1213.877129332" watchObservedRunningTime="2026-01-23 13:52:52.877106341 +0000 UTC m=+1213.899643966"
Jan 23 13:52:52 crc kubenswrapper[4771]: I0123 13:52:52.903430 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.483314048 podStartE2EDuration="19.903379163s" podCreationTimestamp="2026-01-23 13:52:33 +0000 UTC" firstStartedPulling="2026-01-23 13:52:34.599752623 +0000 UTC m=+1195.622290248" lastFinishedPulling="2026-01-23 13:52:49.019817738 +0000 UTC m=+1210.042355363" observedRunningTime="2026-01-23 13:52:52.893326044 +0000 UTC m=+1213.915863689" watchObservedRunningTime="2026-01-23 13:52:52.903379163 +0000 UTC m=+1213.925916808"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.073805 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-99f77f8d8-2j9s2"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.164934 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-scripts\") pod \"10c5f724-de62-4d78-be40-47f2a2e11eb6\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.165132 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsbnm\" (UniqueName: \"kubernetes.io/projected/10c5f724-de62-4d78-be40-47f2a2e11eb6-kube-api-access-lsbnm\") pod \"10c5f724-de62-4d78-be40-47f2a2e11eb6\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.165206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-secret-key\") pod \"10c5f724-de62-4d78-be40-47f2a2e11eb6\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.165275 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c5f724-de62-4d78-be40-47f2a2e11eb6-logs\") pod \"10c5f724-de62-4d78-be40-47f2a2e11eb6\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.165384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-tls-certs\") pod \"10c5f724-de62-4d78-be40-47f2a2e11eb6\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.165465 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-config-data\") pod \"10c5f724-de62-4d78-be40-47f2a2e11eb6\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.165530 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-combined-ca-bundle\") pod \"10c5f724-de62-4d78-be40-47f2a2e11eb6\" (UID: \"10c5f724-de62-4d78-be40-47f2a2e11eb6\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.167918 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c5f724-de62-4d78-be40-47f2a2e11eb6-logs" (OuterVolumeSpecName: "logs") pod "10c5f724-de62-4d78-be40-47f2a2e11eb6" (UID: "10c5f724-de62-4d78-be40-47f2a2e11eb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.175762 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c5f724-de62-4d78-be40-47f2a2e11eb6-kube-api-access-lsbnm" (OuterVolumeSpecName: "kube-api-access-lsbnm") pod "10c5f724-de62-4d78-be40-47f2a2e11eb6" (UID: "10c5f724-de62-4d78-be40-47f2a2e11eb6"). InnerVolumeSpecName "kube-api-access-lsbnm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.176933 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "10c5f724-de62-4d78-be40-47f2a2e11eb6" (UID: "10c5f724-de62-4d78-be40-47f2a2e11eb6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.200643 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.203120 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-scripts" (OuterVolumeSpecName: "scripts") pod "10c5f724-de62-4d78-be40-47f2a2e11eb6" (UID: "10c5f724-de62-4d78-be40-47f2a2e11eb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.211791 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-config-data" (OuterVolumeSpecName: "config-data") pod "10c5f724-de62-4d78-be40-47f2a2e11eb6" (UID: "10c5f724-de62-4d78-be40-47f2a2e11eb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.237402 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10c5f724-de62-4d78-be40-47f2a2e11eb6" (UID: "10c5f724-de62-4d78-be40-47f2a2e11eb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.257785 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "10c5f724-de62-4d78-be40-47f2a2e11eb6" (UID: "10c5f724-de62-4d78-be40-47f2a2e11eb6"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.264858 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55614ed-18f8-4dab-a774-c161ab25107a" path="/var/lib/kubelet/pods/c55614ed-18f8-4dab-a774-c161ab25107a/volumes"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.268090 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz97q\" (UniqueName: \"kubernetes.io/projected/8b3923e5-ae72-46e8-a077-6ac4f4481a68-kube-api-access-bz97q\") pod \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.268169 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b3923e5-ae72-46e8-a077-6ac4f4481a68-etc-machine-id\") pod \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.268266 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data-custom\") pod \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.268765 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3923e5-ae72-46e8-a077-6ac4f4481a68-logs\") pod \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.268815 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-combined-ca-bundle\") pod \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.268887 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data\") pod \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.269001 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-scripts\") pod \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\" (UID: \"8b3923e5-ae72-46e8-a077-6ac4f4481a68\") "
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.269974 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.270000 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.270011 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsbnm\" (UniqueName: \"kubernetes.io/projected/10c5f724-de62-4d78-be40-47f2a2e11eb6-kube-api-access-lsbnm\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.270024 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.270032 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c5f724-de62-4d78-be40-47f2a2e11eb6-logs\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.270042 4771 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c5f724-de62-4d78-be40-47f2a2e11eb6-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.270050 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10c5f724-de62-4d78-be40-47f2a2e11eb6-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.272486 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b3923e5-ae72-46e8-a077-6ac4f4481a68-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8b3923e5-ae72-46e8-a077-6ac4f4481a68" (UID: "8b3923e5-ae72-46e8-a077-6ac4f4481a68"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.272650 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b3923e5-ae72-46e8-a077-6ac4f4481a68-logs" (OuterVolumeSpecName: "logs") pod "8b3923e5-ae72-46e8-a077-6ac4f4481a68" (UID: "8b3923e5-ae72-46e8-a077-6ac4f4481a68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.283078 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3923e5-ae72-46e8-a077-6ac4f4481a68-kube-api-access-bz97q" (OuterVolumeSpecName: "kube-api-access-bz97q") pod "8b3923e5-ae72-46e8-a077-6ac4f4481a68" (UID: "8b3923e5-ae72-46e8-a077-6ac4f4481a68"). InnerVolumeSpecName "kube-api-access-bz97q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.284849 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8b3923e5-ae72-46e8-a077-6ac4f4481a68" (UID: "8b3923e5-ae72-46e8-a077-6ac4f4481a68"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.286744 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-scripts" (OuterVolumeSpecName: "scripts") pod "8b3923e5-ae72-46e8-a077-6ac4f4481a68" (UID: "8b3923e5-ae72-46e8-a077-6ac4f4481a68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.314537 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b3923e5-ae72-46e8-a077-6ac4f4481a68" (UID: "8b3923e5-ae72-46e8-a077-6ac4f4481a68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.346162 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data" (OuterVolumeSpecName: "config-data") pod "8b3923e5-ae72-46e8-a077-6ac4f4481a68" (UID: "8b3923e5-ae72-46e8-a077-6ac4f4481a68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.377312 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.377353 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.377365 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz97q\" (UniqueName: \"kubernetes.io/projected/8b3923e5-ae72-46e8-a077-6ac4f4481a68-kube-api-access-bz97q\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.377377 4771 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b3923e5-ae72-46e8-a077-6ac4f4481a68-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.377388 4771 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.377399 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b3923e5-ae72-46e8-a077-6ac4f4481a68-logs\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.377407 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3923e5-ae72-46e8-a077-6ac4f4481a68-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.846907 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-99f77f8d8-2j9s2" event={"ID":"10c5f724-de62-4d78-be40-47f2a2e11eb6","Type":"ContainerDied","Data":"a7a3f4a5fe6a5bc9b065d9078be9ea7ae9d5f39fd5ad1fd797283db145473632"}
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.846949 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-99f77f8d8-2j9s2"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.846994 4771 scope.go:117] "RemoveContainer" containerID="c317b37309f1dd8f35ba92e9d8dfde672279d895829ca97dce9a0fbfdaa0aa69"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.857365 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8b3923e5-ae72-46e8-a077-6ac4f4481a68","Type":"ContainerDied","Data":"3a0ca57497a339be8decb6291b0f4b56916bdd120a28f37398a9b649247db287"}
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.857436 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.863655 4771 generic.go:334] "Generic (PLEG): container finished" podID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerID="6c5be730524566fc65e3b52c4996494d02387f4b8fb11fe6759499fcec898993" exitCode=0
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.863705 4771 generic.go:334] "Generic (PLEG): container finished" podID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerID="f738fea877a811121270de40352c1d2337ef260968876e862b62a51354bba927" exitCode=2
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.863721 4771 generic.go:334] "Generic (PLEG): container finished" podID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerID="a22868f61cc898341f78672e798992c2d3fb258f887a3ea75640b043a476e50e" exitCode=0
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.864609 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerDied","Data":"6c5be730524566fc65e3b52c4996494d02387f4b8fb11fe6759499fcec898993"}
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.864665 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerDied","Data":"f738fea877a811121270de40352c1d2337ef260968876e862b62a51354bba927"}
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.864682 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerDied","Data":"a22868f61cc898341f78672e798992c2d3fb258f887a3ea75640b043a476e50e"}
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.933675 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-99f77f8d8-2j9s2"]
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.945572 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-99f77f8d8-2j9s2"]
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.954757 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.968713 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.983464 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 23 13:52:53 crc kubenswrapper[4771]: E0123 13:52:53.984031 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api-log"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984052 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api-log"
Jan 23 13:52:53 crc kubenswrapper[4771]: E0123 13:52:53.984066 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984075 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon"
Jan 23 13:52:53 crc kubenswrapper[4771]: E0123 13:52:53.984132 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984141 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api"
Jan 23 13:52:53 crc kubenswrapper[4771]: E0123 13:52:53.984153 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon-log"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984160 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon-log"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984361 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984376 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984394 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" containerName="horizon-log"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.984418 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api-log"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.986340 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.989965 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.990236 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 23 13:52:53 crc kubenswrapper[4771]: I0123 13:52:53.990247 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.024779 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.093169 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.093244 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.093468 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-scripts\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.093543 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.093642 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d81fb1e-8409-4355-8ffc-58fa97951a58-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.093821 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d81fb1e-8409-4355-8ffc-58fa97951a58-logs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.093950 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0"
Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.094018 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-config-data\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.094214 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl9vq\" (UniqueName: \"kubernetes.io/projected/5d81fb1e-8409-4355-8ffc-58fa97951a58-kube-api-access-nl9vq\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.106007 4771 scope.go:117] "RemoveContainer" containerID="2afd31066240e94aa240c0d85614a362530e127adcbc5ed5dbe9b1eaade7ebfd" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.129954 4771 scope.go:117] "RemoveContainer" containerID="9814d7c6b6d61cbea60452dd6badd1b2a63010fe46f19d48224ba55d405e6481" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.165665 4771 scope.go:117] "RemoveContainer" containerID="86029f1d13655df0a06639549de5478e31c78d6ae13d0a7241c1830af3f28f4d" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201580 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201669 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201739 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-scripts\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201846 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d81fb1e-8409-4355-8ffc-58fa97951a58-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201920 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d81fb1e-8409-4355-8ffc-58fa97951a58-logs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " 
pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.201997 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-config-data\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.202088 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl9vq\" (UniqueName: \"kubernetes.io/projected/5d81fb1e-8409-4355-8ffc-58fa97951a58-kube-api-access-nl9vq\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.205569 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d81fb1e-8409-4355-8ffc-58fa97951a58-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.211462 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.214987 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.215375 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d81fb1e-8409-4355-8ffc-58fa97951a58-logs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.218812 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.218892 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-scripts\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.221816 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.225218 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d81fb1e-8409-4355-8ffc-58fa97951a58-config-data\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: 
I0123 13:52:54.226094 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl9vq\" (UniqueName: \"kubernetes.io/projected/5d81fb1e-8409-4355-8ffc-58fa97951a58-kube-api-access-nl9vq\") pod \"cinder-api-0\" (UID: \"5d81fb1e-8409-4355-8ffc-58fa97951a58\") " pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.310706 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.794234 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.794906 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-log" containerID="cri-o://33f68cc244fb43838ac5cb51c2052e354ead4c115dfb9bcee5098d66f9fd4411" gracePeriod=30 Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.795098 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-httpd" containerID="cri-o://a5ae9e2900d5d17c528f906a6b0ef104cf5f0c080873ec30ca11cd115b8eb912" gracePeriod=30 Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.822680 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 13:52:54 crc kubenswrapper[4771]: W0123 13:52:54.828934 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d81fb1e_8409_4355_8ffc_58fa97951a58.slice/crio-2998a71e8552e1013fa6f298878bd913821f2bf77edb76d4306c61d9f44f0dc8 WatchSource:0}: Error finding container 2998a71e8552e1013fa6f298878bd913821f2bf77edb76d4306c61d9f44f0dc8: Status 404 returned error can't find the container with id 2998a71e8552e1013fa6f298878bd913821f2bf77edb76d4306c61d9f44f0dc8 Jan 23 13:52:54 crc kubenswrapper[4771]: I0123 13:52:54.884586 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d81fb1e-8409-4355-8ffc-58fa97951a58","Type":"ContainerStarted","Data":"2998a71e8552e1013fa6f298878bd913821f2bf77edb76d4306c61d9f44f0dc8"} Jan 23 13:52:55 crc kubenswrapper[4771]: I0123 13:52:55.253245 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c5f724-de62-4d78-be40-47f2a2e11eb6" path="/var/lib/kubelet/pods/10c5f724-de62-4d78-be40-47f2a2e11eb6/volumes" Jan 23 13:52:55 crc kubenswrapper[4771]: I0123 13:52:55.254518 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" path="/var/lib/kubelet/pods/8b3923e5-ae72-46e8-a077-6ac4f4481a68/volumes" Jan 23 13:52:55 crc kubenswrapper[4771]: I0123 13:52:55.899220 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d81fb1e-8409-4355-8ffc-58fa97951a58","Type":"ContainerStarted","Data":"5e4902b1a497035f91b8d1c0747d1841db4f9804a6523a5b0488f490ba73cd2b"} Jan 23 13:52:55 crc kubenswrapper[4771]: I0123 13:52:55.908915 4771 generic.go:334] "Generic (PLEG): container finished" podID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerID="33f68cc244fb43838ac5cb51c2052e354ead4c115dfb9bcee5098d66f9fd4411" exitCode=143 Jan 23 13:52:55 crc kubenswrapper[4771]: I0123 13:52:55.908986 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"1e4a6097-fbe0-4d82-8211-f76c15aa9e85","Type":"ContainerDied","Data":"33f68cc244fb43838ac5cb51c2052e354ead4c115dfb9bcee5098d66f9fd4411"} Jan 23 13:52:56 crc kubenswrapper[4771]: I0123 13:52:56.925234 4771 generic.go:334] "Generic (PLEG): container finished" podID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerID="a5ae9e2900d5d17c528f906a6b0ef104cf5f0c080873ec30ca11cd115b8eb912" exitCode=0 Jan 23 13:52:56 crc kubenswrapper[4771]: I0123 13:52:56.925757 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e4a6097-fbe0-4d82-8211-f76c15aa9e85","Type":"ContainerDied","Data":"a5ae9e2900d5d17c528f906a6b0ef104cf5f0c080873ec30ca11cd115b8eb912"} Jan 23 13:52:56 crc kubenswrapper[4771]: I0123 13:52:56.928237 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d81fb1e-8409-4355-8ffc-58fa97951a58","Type":"ContainerStarted","Data":"b1459226031f11924f6fab703ada13f873d2d563ed5c97c8aecbdd93ed6ef6bf"} Jan 23 13:52:56 crc kubenswrapper[4771]: I0123 13:52:56.930116 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 13:52:56 crc kubenswrapper[4771]: I0123 13:52:56.998746 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.998715077 podStartE2EDuration="3.998715077s" podCreationTimestamp="2026-01-23 13:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:52:56.989809445 +0000 UTC m=+1218.012347090" watchObservedRunningTime="2026-01-23 13:52:56.998715077 +0000 UTC m=+1218.021252702" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.120270 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.120370 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.162677 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.179284 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.404698 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.509625 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-scripts\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.509741 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-logs\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.509880 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-combined-ca-bundle\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.509920 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvmsn\" (UniqueName: \"kubernetes.io/projected/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-kube-api-access-pvmsn\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.509953 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-config-data\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.510048 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-public-tls-certs\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.510082 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-httpd-run\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.510143 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\" (UID: \"1e4a6097-fbe0-4d82-8211-f76c15aa9e85\") " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.512739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-logs" (OuterVolumeSpecName: "logs") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.512764 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.522334 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-kube-api-access-pvmsn" (OuterVolumeSpecName: "kube-api-access-pvmsn") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "kube-api-access-pvmsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.523670 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.540282 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-scripts" (OuterVolumeSpecName: "scripts") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.559532 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.589524 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-config-data" (OuterVolumeSpecName: "config-data") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.597677 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1e4a6097-fbe0-4d82-8211-f76c15aa9e85" (UID: "1e4a6097-fbe0-4d82-8211-f76c15aa9e85"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612308 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612347 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612359 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612374 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612388 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvmsn\" (UniqueName: \"kubernetes.io/projected/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-kube-api-access-pvmsn\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612400 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612427 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.612444 4771 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e4a6097-fbe0-4d82-8211-f76c15aa9e85-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.641379 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.715898 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.968331 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.969745 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e4a6097-fbe0-4d82-8211-f76c15aa9e85","Type":"ContainerDied","Data":"deb1395d6cdbf733141ce5fda30d88f3c2c78b3444269cc1ce059e2b14ac2180"} Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.970063 4771 scope.go:117] "RemoveContainer" containerID="a5ae9e2900d5d17c528f906a6b0ef104cf5f0c080873ec30ca11cd115b8eb912" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.976613 4771 generic.go:334] "Generic (PLEG): container finished" podID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerID="48a04ffdf869bbaccc67192069e6e5b56844de2651764fd24e866e2045656a11" exitCode=0 Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.977134 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerDied","Data":"48a04ffdf869bbaccc67192069e6e5b56844de2651764fd24e866e2045656a11"} Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.977534 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.977955 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 13:52:57 crc kubenswrapper[4771]: I0123 13:52:57.989809 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="8b3923e5-ae72-46e8-a077-6ac4f4481a68" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.192:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.005264 4771 scope.go:117] "RemoveContainer" containerID="33f68cc244fb43838ac5cb51c2052e354ead4c115dfb9bcee5098d66f9fd4411" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.026087 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.043135 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.056196 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:52:58 crc kubenswrapper[4771]: E0123 13:52:58.057269 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-log" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.057293 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-log" Jan 23 13:52:58 crc kubenswrapper[4771]: E0123 13:52:58.057310 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-httpd" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.057318 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-httpd" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.057626 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-log" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.061506 
4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" containerName="glance-httpd" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.063330 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.065155 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.074482 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.074758 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.152757 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.152828 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.152852 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.152901 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.152935 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff136ca3-c0df-418b-b38f-9fed67d6ab21-logs\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.152974 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff136ca3-c0df-418b-b38f-9fed67d6ab21-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.153005 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpgb\" (UniqueName: \"kubernetes.io/projected/ff136ca3-c0df-418b-b38f-9fed67d6ab21-kube-api-access-5tpgb\") pod 
\"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.153022 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260143 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff136ca3-c0df-418b-b38f-9fed67d6ab21-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260274 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tpgb\" (UniqueName: \"kubernetes.io/projected/ff136ca3-c0df-418b-b38f-9fed67d6ab21-kube-api-access-5tpgb\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260308 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260438 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260518 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260634 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.260702 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff136ca3-c0df-418b-b38f-9fed67d6ab21-logs\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " 
pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.261520 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff136ca3-c0df-418b-b38f-9fed67d6ab21-logs\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.261948 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.261996 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff136ca3-c0df-418b-b38f-9fed67d6ab21-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.278578 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.279189 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.279393 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.292342 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff136ca3-c0df-418b-b38f-9fed67d6ab21-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.311238 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tpgb\" (UniqueName: \"kubernetes.io/projected/ff136ca3-c0df-418b-b38f-9fed67d6ab21-kube-api-access-5tpgb\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.335090 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ff136ca3-c0df-418b-b38f-9fed67d6ab21\") " pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.404926 4771 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.507337 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.524326 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.565683 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.571279 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-sg-core-conf-yaml\") pod \"0733fe58-0f4d-46d7-ac56-7382dba66c56\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.572543 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-scripts\") pod \"0733fe58-0f4d-46d7-ac56-7382dba66c56\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.572611 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-config-data\") pod \"0733fe58-0f4d-46d7-ac56-7382dba66c56\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.572708 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-run-httpd\") pod \"0733fe58-0f4d-46d7-ac56-7382dba66c56\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.572778 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-combined-ca-bundle\") pod \"0733fe58-0f4d-46d7-ac56-7382dba66c56\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.572989 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj469\" (UniqueName: \"kubernetes.io/projected/0733fe58-0f4d-46d7-ac56-7382dba66c56-kube-api-access-hj469\") pod \"0733fe58-0f4d-46d7-ac56-7382dba66c56\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.573078 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-log-httpd\") pod \"0733fe58-0f4d-46d7-ac56-7382dba66c56\" (UID: \"0733fe58-0f4d-46d7-ac56-7382dba66c56\") " Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.573833 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0733fe58-0f4d-46d7-ac56-7382dba66c56" (UID: "0733fe58-0f4d-46d7-ac56-7382dba66c56"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.575173 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.580079 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0733fe58-0f4d-46d7-ac56-7382dba66c56-kube-api-access-hj469" (OuterVolumeSpecName: "kube-api-access-hj469") pod "0733fe58-0f4d-46d7-ac56-7382dba66c56" (UID: "0733fe58-0f4d-46d7-ac56-7382dba66c56"). InnerVolumeSpecName "kube-api-access-hj469". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.580100 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-scripts" (OuterVolumeSpecName: "scripts") pod "0733fe58-0f4d-46d7-ac56-7382dba66c56" (UID: "0733fe58-0f4d-46d7-ac56-7382dba66c56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.583951 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0733fe58-0f4d-46d7-ac56-7382dba66c56" (UID: "0733fe58-0f4d-46d7-ac56-7382dba66c56"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.622916 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0733fe58-0f4d-46d7-ac56-7382dba66c56" (UID: "0733fe58-0f4d-46d7-ac56-7382dba66c56"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.677091 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj469\" (UniqueName: \"kubernetes.io/projected/0733fe58-0f4d-46d7-ac56-7382dba66c56-kube-api-access-hj469\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.677117 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0733fe58-0f4d-46d7-ac56-7382dba66c56-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.677128 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.677139 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.722211 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0733fe58-0f4d-46d7-ac56-7382dba66c56" (UID: "0733fe58-0f4d-46d7-ac56-7382dba66c56"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.751495 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-config-data" (OuterVolumeSpecName: "config-data") pod "0733fe58-0f4d-46d7-ac56-7382dba66c56" (UID: "0733fe58-0f4d-46d7-ac56-7382dba66c56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.779501 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.779553 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0733fe58-0f4d-46d7-ac56-7382dba66c56-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.995938 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0733fe58-0f4d-46d7-ac56-7382dba66c56","Type":"ContainerDied","Data":"080bfb2b2d36752e2e435f84cb065c7036d23a26594e3c1f7175f6607a692914"} Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.996018 4771 scope.go:117] "RemoveContainer" containerID="6c5be730524566fc65e3b52c4996494d02387f4b8fb11fe6759499fcec898993" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.996281 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:58 crc kubenswrapper[4771]: I0123 13:52:58.997323 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.048145 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.056102 4771 scope.go:117] "RemoveContainer" containerID="f738fea877a811121270de40352c1d2337ef260968876e862b62a51354bba927" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.060709 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.076671 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.114107 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:59 crc kubenswrapper[4771]: E0123 13:52:59.114791 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="sg-core" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.114827 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="sg-core" Jan 23 13:52:59 crc kubenswrapper[4771]: E0123 13:52:59.114861 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="proxy-httpd" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.114868 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="proxy-httpd" Jan 23 13:52:59 crc kubenswrapper[4771]: E0123 13:52:59.114895 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-notification-agent" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.114905 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-notification-agent" Jan 23 13:52:59 crc kubenswrapper[4771]: E0123 13:52:59.114915 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-central-agent" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.114922 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-central-agent" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.115181 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="sg-core" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.115218 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-notification-agent" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.115228 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="proxy-httpd" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.115238 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" containerName="ceilometer-central-agent" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.117371 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.121879 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.122043 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.130045 4771 scope.go:117] "RemoveContainer" containerID="a22868f61cc898341f78672e798992c2d3fb258f887a3ea75640b043a476e50e" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.152840 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.188653 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.188709 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-run-httpd\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.188729 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.188780 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-config-data\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.188818 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-scripts\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.188854 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmp8\" (UniqueName: \"kubernetes.io/projected/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-kube-api-access-fwmp8\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.188899 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-log-httpd\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.212771 4771 scope.go:117] "RemoveContainer" containerID="48a04ffdf869bbaccc67192069e6e5b56844de2651764fd24e866e2045656a11" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.287483 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0733fe58-0f4d-46d7-ac56-7382dba66c56" path="/var/lib/kubelet/pods/0733fe58-0f4d-46d7-ac56-7382dba66c56/volumes" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.288388 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4a6097-fbe0-4d82-8211-f76c15aa9e85" path="/var/lib/kubelet/pods/1e4a6097-fbe0-4d82-8211-f76c15aa9e85/volumes" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.289359 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.292284 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.292332 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-run-httpd\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.292361 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.292445 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-config-data\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.292483 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-scripts\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.292521 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwmp8\" (UniqueName: \"kubernetes.io/projected/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-kube-api-access-fwmp8\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.292595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-log-httpd\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.293090 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-log-httpd\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.302969 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-run-httpd\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.310970 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.312715 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-scripts\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.331194 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-config-data\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.333670 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.334275 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwmp8\" (UniqueName: \"kubernetes.io/projected/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-kube-api-access-fwmp8\") 
pod \"ceilometer-0\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " pod="openstack/ceilometer-0" Jan 23 13:52:59 crc kubenswrapper[4771]: I0123 13:52:59.443087 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.032356 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.033282 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.032529 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff136ca3-c0df-418b-b38f-9fed67d6ab21","Type":"ContainerStarted","Data":"f9b3373ad161e1a49eab3d7ee554f882ce8640868009e5a6afdc89082300ceaa"} Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.079073 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:53:00 crc kubenswrapper[4771]: W0123 13:53:00.085353 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc99c1e3_2a66_4ab2_9b34_fabc7b512bf9.slice/crio-f205ad738b13bc38a4a3a3a6f3569fa7bcedc2cb625372e4a08e9401d34332d3 WatchSource:0}: Error finding container f205ad738b13bc38a4a3a3a6f3569fa7bcedc2cb625372e4a08e9401d34332d3: Status 404 returned error can't find the container with id f205ad738b13bc38a4a3a3a6f3569fa7bcedc2cb625372e4a08e9401d34332d3 Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.423924 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.656507 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cpvm8"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.658314 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.703487 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cpvm8"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.762857 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79pq5\" (UniqueName: \"kubernetes.io/projected/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-kube-api-access-79pq5\") pod \"nova-api-db-create-cpvm8\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.763009 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-operator-scripts\") pod \"nova-api-db-create-cpvm8\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.763161 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xlb7n"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.765800 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.794153 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xlb7n"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.832676 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ac2a-account-create-update-4slxw"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.836319 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.840809 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.841664 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ac2a-account-create-update-4slxw"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.865198 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79pq5\" (UniqueName: \"kubernetes.io/projected/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-kube-api-access-79pq5\") pod \"nova-api-db-create-cpvm8\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.865304 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-operator-scripts\") pod \"nova-api-db-create-cpvm8\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.865528 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs62f\" (UniqueName: \"kubernetes.io/projected/4579b579-c870-402e-90ca-0d37db6e919d-kube-api-access-qs62f\") pod \"nova-cell0-db-create-xlb7n\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.865582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4579b579-c870-402e-90ca-0d37db6e919d-operator-scripts\") pod \"nova-cell0-db-create-xlb7n\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.866671 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-operator-scripts\") pod \"nova-api-db-create-cpvm8\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.891547 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79pq5\" (UniqueName: \"kubernetes.io/projected/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-kube-api-access-79pq5\") pod \"nova-api-db-create-cpvm8\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.915158 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 
13:53:00.942800 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lbvw8"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.948766 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.972755 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs62f\" (UniqueName: \"kubernetes.io/projected/4579b579-c870-402e-90ca-0d37db6e919d-kube-api-access-qs62f\") pod \"nova-cell0-db-create-xlb7n\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.972827 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4579b579-c870-402e-90ca-0d37db6e919d-operator-scripts\") pod \"nova-cell0-db-create-xlb7n\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.972897 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8f22a6-98e2-45a6-9589-77968163dd98-operator-scripts\") pod \"nova-api-ac2a-account-create-update-4slxw\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.972970 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48n59\" (UniqueName: \"kubernetes.io/projected/cb8f22a6-98e2-45a6-9589-77968163dd98-kube-api-access-48n59\") pod \"nova-api-ac2a-account-create-update-4slxw\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.974128 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4579b579-c870-402e-90ca-0d37db6e919d-operator-scripts\") pod \"nova-cell0-db-create-xlb7n\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.994719 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cf90-account-create-update-mgq2x"] Jan 23 13:53:00 crc kubenswrapper[4771]: I0123 13:53:00.996481 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.002978 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.003497 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.006543 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.017680 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs62f\" (UniqueName: \"kubernetes.io/projected/4579b579-c870-402e-90ca-0d37db6e919d-kube-api-access-qs62f\") pod \"nova-cell0-db-create-xlb7n\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.071767 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lbvw8"] Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.074629 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8f22a6-98e2-45a6-9589-77968163dd98-operator-scripts\") pod \"nova-api-ac2a-account-create-update-4slxw\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.074799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48n59\" (UniqueName: \"kubernetes.io/projected/cb8f22a6-98e2-45a6-9589-77968163dd98-kube-api-access-48n59\") pod \"nova-api-ac2a-account-create-update-4slxw\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.074927 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnswd\" (UniqueName: \"kubernetes.io/projected/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-kube-api-access-vnswd\") pod \"nova-cell1-db-create-lbvw8\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.075034 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-operator-scripts\") pod \"nova-cell1-db-create-lbvw8\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.077179 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8f22a6-98e2-45a6-9589-77968163dd98-operator-scripts\") pod \"nova-api-ac2a-account-create-update-4slxw\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.100902 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.107380 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cf90-account-create-update-mgq2x"] Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.114499 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48n59\" (UniqueName: \"kubernetes.io/projected/cb8f22a6-98e2-45a6-9589-77968163dd98-kube-api-access-48n59\") pod \"nova-api-ac2a-account-create-update-4slxw\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.135178 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerStarted","Data":"fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564"} Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.135235 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerStarted","Data":"f205ad738b13bc38a4a3a3a6f3569fa7bcedc2cb625372e4a08e9401d34332d3"} Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.168936 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-772b-account-create-update-xq297"] Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.176927 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6qf\" (UniqueName: \"kubernetes.io/projected/0369c78d-76d3-4407-bdf5-07a6c326335f-kube-api-access-jw6qf\") pod \"nova-cell0-cf90-account-create-update-mgq2x\" (UID: \"0369c78d-76d3-4407-bdf5-07a6c326335f\") " pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.177049 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnswd\" (UniqueName: \"kubernetes.io/projected/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-kube-api-access-vnswd\") pod \"nova-cell1-db-create-lbvw8\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.177103 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-operator-scripts\") pod \"nova-cell1-db-create-lbvw8\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.177161 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0369c78d-76d3-4407-bdf5-07a6c326335f-operator-scripts\") pod \"nova-cell0-cf90-account-create-update-mgq2x\" (UID: \"0369c78d-76d3-4407-bdf5-07a6c326335f\") " pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.178794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d52cbbcd-9ccc-4f07-a407-15edb7bde07e","Type":"ContainerStarted","Data":"14a8d95566e2aec2c7561e70493d1392cb39c86f0d60278ecea8852ba886b74a"} Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.179395 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.180333 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-operator-scripts\") pod \"nova-cell1-db-create-lbvw8\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.186892 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.192167 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.201792 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-772b-account-create-update-xq297"] Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.218908 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff136ca3-c0df-418b-b38f-9fed67d6ab21","Type":"ContainerStarted","Data":"c47851d4aa77c9e30bfc56594765576d44e29f41d7f121cce48e907d30cc92af"} Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.219909 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnswd\" (UniqueName: \"kubernetes.io/projected/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-kube-api-access-vnswd\") pod \"nova-cell1-db-create-lbvw8\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.258963 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.02363356 podStartE2EDuration="34.258922914s" podCreationTimestamp="2026-01-23 13:52:27 +0000 UTC" firstStartedPulling="2026-01-23 13:52:28.082469475 +0000 UTC m=+1189.105007100" lastFinishedPulling="2026-01-23 13:53:00.317758829 +0000 UTC m=+1221.340296454" observedRunningTime="2026-01-23 13:53:01.249507106 +0000 UTC m=+1222.272044761" watchObservedRunningTime="2026-01-23 13:53:01.258922914 +0000 UTC m=+1222.281460539" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.282776 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw6qf\" (UniqueName: \"kubernetes.io/projected/0369c78d-76d3-4407-bdf5-07a6c326335f-kube-api-access-jw6qf\") pod \"nova-cell0-cf90-account-create-update-mgq2x\" (UID: \"0369c78d-76d3-4407-bdf5-07a6c326335f\") " pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.282904 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbk8v\" (UniqueName: \"kubernetes.io/projected/f93e3306-16f5-4d49-88e0-0e5baef7912c-kube-api-access-vbk8v\") pod \"nova-cell1-772b-account-create-update-xq297\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.283632 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0369c78d-76d3-4407-bdf5-07a6c326335f-operator-scripts\") pod \"nova-cell0-cf90-account-create-update-mgq2x\" (UID: 
\"0369c78d-76d3-4407-bdf5-07a6c326335f\") " pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.283693 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f93e3306-16f5-4d49-88e0-0e5baef7912c-operator-scripts\") pod \"nova-cell1-772b-account-create-update-xq297\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.290508 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0369c78d-76d3-4407-bdf5-07a6c326335f-operator-scripts\") pod \"nova-cell0-cf90-account-create-update-mgq2x\" (UID: \"0369c78d-76d3-4407-bdf5-07a6c326335f\") " pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.302480 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.341514 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw6qf\" (UniqueName: \"kubernetes.io/projected/0369c78d-76d3-4407-bdf5-07a6c326335f-kube-api-access-jw6qf\") pod \"nova-cell0-cf90-account-create-update-mgq2x\" (UID: \"0369c78d-76d3-4407-bdf5-07a6c326335f\") " pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.396609 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbk8v\" (UniqueName: \"kubernetes.io/projected/f93e3306-16f5-4d49-88e0-0e5baef7912c-kube-api-access-vbk8v\") pod \"nova-cell1-772b-account-create-update-xq297\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.459837 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f93e3306-16f5-4d49-88e0-0e5baef7912c-operator-scripts\") pod \"nova-cell1-772b-account-create-update-xq297\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.466235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f93e3306-16f5-4d49-88e0-0e5baef7912c-operator-scripts\") pod \"nova-cell1-772b-account-create-update-xq297\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.515310 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbk8v\" (UniqueName: \"kubernetes.io/projected/f93e3306-16f5-4d49-88e0-0e5baef7912c-kube-api-access-vbk8v\") pod \"nova-cell1-772b-account-create-update-xq297\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.623007 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.700736 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cpvm8"] Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.819196 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:01 crc kubenswrapper[4771]: I0123 13:53:01.964192 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xlb7n"] Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.160194 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ac2a-account-create-update-4slxw"] Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.233262 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lbvw8"] Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.251586 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac2a-account-create-update-4slxw" event={"ID":"cb8f22a6-98e2-45a6-9589-77968163dd98","Type":"ContainerStarted","Data":"208f1c811c5f5ec6b65a50185dcba88cfad24d5c1694da58ecb476b4c93e5bd3"} Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.270008 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xlb7n" event={"ID":"4579b579-c870-402e-90ca-0d37db6e919d","Type":"ContainerStarted","Data":"ed7fdd1b3f922f4a513586774718cd304fc4e191c231781adacb5db6b7b574e3"} Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.297113 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerStarted","Data":"c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc"} Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.318940 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff136ca3-c0df-418b-b38f-9fed67d6ab21","Type":"ContainerStarted","Data":"32f6ccfa0f6d7a220f73380a17caa77a970acaf56e1240d90bf70e7f757759ab"} Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.323268 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" containerID="cri-o://7dfd3ffe2bc11f3fbc6ea00e102c90f8f98b6ef5eefecf4ea870b9516452b295" gracePeriod=30 Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.323577 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cpvm8" event={"ID":"6a7835af-e3df-48e7-9db2-4c5fd0f75baf","Type":"ContainerStarted","Data":"39879edeb910e84f38b4564b201e6d0d3c8626ecc1d863729ecd1919da1cb16f"} Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.323602 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cpvm8" event={"ID":"6a7835af-e3df-48e7-9db2-4c5fd0f75baf","Type":"ContainerStarted","Data":"dfcee13b89a6413804e283b3fb34f608e604963082cc6041b607602d37e1abd7"} Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.377493 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.377446047 podStartE2EDuration="4.377446047s" podCreationTimestamp="2026-01-23 13:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:53:02.352754495 +0000 UTC m=+1223.375292140" watchObservedRunningTime="2026-01-23 13:53:02.377446047 +0000 UTC m=+1223.399983672" Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.416683 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cf90-account-create-update-mgq2x"] Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.430283 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-cpvm8" podStartSLOduration=2.4302553 podStartE2EDuration="2.4302553s" podCreationTimestamp="2026-01-23 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:53:02.385354808 +0000 UTC m=+1223.407892433" watchObservedRunningTime="2026-01-23 13:53:02.4302553 +0000 UTC m=+1223.452792925" Jan 23 13:53:02 crc kubenswrapper[4771]: W0123 13:53:02.441557 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0369c78d_76d3_4407_bdf5_07a6c326335f.slice/crio-427b94fad39f92fcffd27561ee94a1456e8c46bf28e4ba4101dbd061cd6c99f5 WatchSource:0}: Error finding container 427b94fad39f92fcffd27561ee94a1456e8c46bf28e4ba4101dbd061cd6c99f5: Status 404 returned error can't find the container with id 427b94fad39f92fcffd27561ee94a1456e8c46bf28e4ba4101dbd061cd6c99f5 Jan 23 13:53:02 crc kubenswrapper[4771]: I0123 13:53:02.563731 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-772b-account-create-update-xq297"] Jan 23 13:53:02 crc kubenswrapper[4771]: W0123 13:53:02.601980 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf93e3306_16f5_4d49_88e0_0e5baef7912c.slice/crio-c64315a698dca951d618b2110906fd5700c3c00bf78b7e1f4d8fb761bfd0aea2 WatchSource:0}: Error finding container c64315a698dca951d618b2110906fd5700c3c00bf78b7e1f4d8fb761bfd0aea2: Status 404 returned error can't find the container with id c64315a698dca951d618b2110906fd5700c3c00bf78b7e1f4d8fb761bfd0aea2 Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.336811 4771 generic.go:334] "Generic (PLEG): container finished" podID="0d5e4f99-16c5-43fa-8606-e4b1656e2eaf" containerID="9a8197b30c26491834bc5e42d074fbdef69462419a59eb81b01e83debc145687" exitCode=0 Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.337221 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lbvw8" event={"ID":"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf","Type":"ContainerDied","Data":"9a8197b30c26491834bc5e42d074fbdef69462419a59eb81b01e83debc145687"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.337251 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lbvw8" event={"ID":"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf","Type":"ContainerStarted","Data":"bb21bc7b184684c828986f9fc0e676da3d42a1ce0868c07d91f50a7066936138"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.339146 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-772b-account-create-update-xq297" event={"ID":"f93e3306-16f5-4d49-88e0-0e5baef7912c","Type":"ContainerStarted","Data":"1b645f92c84dc6f4b22655ae1482cee38a23ec3b6ced2edf9b69dbacfdc05e37"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.339198 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-772b-account-create-update-xq297" event={"ID":"f93e3306-16f5-4d49-88e0-0e5baef7912c","Type":"ContainerStarted","Data":"c64315a698dca951d618b2110906fd5700c3c00bf78b7e1f4d8fb761bfd0aea2"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.350053 4771 generic.go:334] "Generic (PLEG): container finished" podID="cb8f22a6-98e2-45a6-9589-77968163dd98" containerID="d0d6d8c46e45e94e71c2824a6b84445a4b55d6d55eb9fab5f77dda9601b4a41a" exitCode=0 Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.350128 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac2a-account-create-update-4slxw" event={"ID":"cb8f22a6-98e2-45a6-9589-77968163dd98","Type":"ContainerDied","Data":"d0d6d8c46e45e94e71c2824a6b84445a4b55d6d55eb9fab5f77dda9601b4a41a"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.352670 4771 generic.go:334] "Generic (PLEG): container finished" podID="4579b579-c870-402e-90ca-0d37db6e919d" containerID="a0b70f3cfa59f2eb3cc93c4118bb2dceb008e9f96a7d16f688f3ea29473abfbd" exitCode=0 Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.352721 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xlb7n" event={"ID":"4579b579-c870-402e-90ca-0d37db6e919d","Type":"ContainerDied","Data":"a0b70f3cfa59f2eb3cc93c4118bb2dceb008e9f96a7d16f688f3ea29473abfbd"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.362281 4771 generic.go:334] "Generic (PLEG): container finished" podID="0369c78d-76d3-4407-bdf5-07a6c326335f" containerID="91e885c62bc81bfb5e96fb8f2db85e70c90e01806346919076366e9e1af2333d" exitCode=0 Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.362359 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" event={"ID":"0369c78d-76d3-4407-bdf5-07a6c326335f","Type":"ContainerDied","Data":"91e885c62bc81bfb5e96fb8f2db85e70c90e01806346919076366e9e1af2333d"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.362383 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" event={"ID":"0369c78d-76d3-4407-bdf5-07a6c326335f","Type":"ContainerStarted","Data":"427b94fad39f92fcffd27561ee94a1456e8c46bf28e4ba4101dbd061cd6c99f5"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.375763 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerStarted","Data":"e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.379905 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a7835af-e3df-48e7-9db2-4c5fd0f75baf" containerID="39879edeb910e84f38b4564b201e6d0d3c8626ecc1d863729ecd1919da1cb16f" exitCode=0 Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.381100 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cpvm8" event={"ID":"6a7835af-e3df-48e7-9db2-4c5fd0f75baf","Type":"ContainerDied","Data":"39879edeb910e84f38b4564b201e6d0d3c8626ecc1d863729ecd1919da1cb16f"} Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.432234 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-772b-account-create-update-xq297" podStartSLOduration=2.43220679 podStartE2EDuration="2.43220679s" podCreationTimestamp="2026-01-23 13:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 13:53:03.396106367 +0000 UTC m=+1224.418644002" watchObservedRunningTime="2026-01-23 13:53:03.43220679 +0000 UTC m=+1224.454744405" Jan 23 13:53:03 crc kubenswrapper[4771]: I0123 13:53:03.652258 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.406786 4771 generic.go:334] "Generic (PLEG): container finished" podID="f93e3306-16f5-4d49-88e0-0e5baef7912c" containerID="1b645f92c84dc6f4b22655ae1482cee38a23ec3b6ced2edf9b69dbacfdc05e37" exitCode=0 Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.406998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-772b-account-create-update-xq297" event={"ID":"f93e3306-16f5-4d49-88e0-0e5baef7912c","Type":"ContainerDied","Data":"1b645f92c84dc6f4b22655ae1482cee38a23ec3b6ced2edf9b69dbacfdc05e37"} Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.434805 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerStarted","Data":"4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8"} Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.435121 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="ceilometer-central-agent" containerID="cri-o://fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564" gracePeriod=30 Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.435594 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.436065 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="proxy-httpd" containerID="cri-o://4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8" gracePeriod=30 Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.436132 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="sg-core" containerID="cri-o://e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f" gracePeriod=30 Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.436190 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="ceilometer-notification-agent" containerID="cri-o://c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc" gracePeriod=30 Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.462224 4771 generic.go:334] "Generic (PLEG): container finished" podID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerID="7dfd3ffe2bc11f3fbc6ea00e102c90f8f98b6ef5eefecf4ea870b9516452b295" exitCode=0 Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.462626 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerDied","Data":"7dfd3ffe2bc11f3fbc6ea00e102c90f8f98b6ef5eefecf4ea870b9516452b295"} Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.462706 4771 scope.go:117] "RemoveContainer" containerID="85a5ad26c08823bfb85c746b25cf368c8afd5e851279563feda5d289bbf7012c" Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 
13:53:04.483637 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.491547314 podStartE2EDuration="5.483612517s" podCreationTimestamp="2026-01-23 13:52:59 +0000 UTC" firstStartedPulling="2026-01-23 13:53:00.095361884 +0000 UTC m=+1221.117899509" lastFinishedPulling="2026-01-23 13:53:04.087427077 +0000 UTC m=+1225.109964712" observedRunningTime="2026-01-23 13:53:04.462455027 +0000 UTC m=+1225.484992652" watchObservedRunningTime="2026-01-23 13:53:04.483612517 +0000 UTC m=+1225.506150142" Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.741957 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.926525 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebafbd30-6f52-4209-b962-c97da4d4f9da-logs\") pod \"ebafbd30-6f52-4209-b962-c97da4d4f9da\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.926957 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebafbd30-6f52-4209-b962-c97da4d4f9da-logs" (OuterVolumeSpecName: "logs") pod "ebafbd30-6f52-4209-b962-c97da4d4f9da" (UID: "ebafbd30-6f52-4209-b962-c97da4d4f9da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.927127 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-combined-ca-bundle\") pod \"ebafbd30-6f52-4209-b962-c97da4d4f9da\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.927296 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-custom-prometheus-ca\") pod \"ebafbd30-6f52-4209-b962-c97da4d4f9da\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.927347 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sszvq\" (UniqueName: \"kubernetes.io/projected/ebafbd30-6f52-4209-b962-c97da4d4f9da-kube-api-access-sszvq\") pod \"ebafbd30-6f52-4209-b962-c97da4d4f9da\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.927387 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-config-data\") pod \"ebafbd30-6f52-4209-b962-c97da4d4f9da\" (UID: \"ebafbd30-6f52-4209-b962-c97da4d4f9da\") " Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.927863 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebafbd30-6f52-4209-b962-c97da4d4f9da-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:04 crc kubenswrapper[4771]: I0123 13:53:04.945035 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebafbd30-6f52-4209-b962-c97da4d4f9da-kube-api-access-sszvq" (OuterVolumeSpecName: "kube-api-access-sszvq") pod "ebafbd30-6f52-4209-b962-c97da4d4f9da" (UID: "ebafbd30-6f52-4209-b962-c97da4d4f9da"). 
InnerVolumeSpecName "kube-api-access-sszvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.032702 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sszvq\" (UniqueName: \"kubernetes.io/projected/ebafbd30-6f52-4209-b962-c97da4d4f9da-kube-api-access-sszvq\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.034708 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-config-data" (OuterVolumeSpecName: "config-data") pod "ebafbd30-6f52-4209-b962-c97da4d4f9da" (UID: "ebafbd30-6f52-4209-b962-c97da4d4f9da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.038809 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.043961 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ebafbd30-6f52-4209-b962-c97da4d4f9da" (UID: "ebafbd30-6f52-4209-b962-c97da4d4f9da"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.076125 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebafbd30-6f52-4209-b962-c97da4d4f9da" (UID: "ebafbd30-6f52-4209-b962-c97da4d4f9da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.117054 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.135523 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48n59\" (UniqueName: \"kubernetes.io/projected/cb8f22a6-98e2-45a6-9589-77968163dd98-kube-api-access-48n59\") pod \"cb8f22a6-98e2-45a6-9589-77968163dd98\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.135713 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw6qf\" (UniqueName: \"kubernetes.io/projected/0369c78d-76d3-4407-bdf5-07a6c326335f-kube-api-access-jw6qf\") pod \"0369c78d-76d3-4407-bdf5-07a6c326335f\" (UID: \"0369c78d-76d3-4407-bdf5-07a6c326335f\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.135800 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0369c78d-76d3-4407-bdf5-07a6c326335f-operator-scripts\") pod \"0369c78d-76d3-4407-bdf5-07a6c326335f\" (UID: \"0369c78d-76d3-4407-bdf5-07a6c326335f\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.135859 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8f22a6-98e2-45a6-9589-77968163dd98-operator-scripts\") pod \"cb8f22a6-98e2-45a6-9589-77968163dd98\" (UID: \"cb8f22a6-98e2-45a6-9589-77968163dd98\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.136733 4771 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.136752 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.136763 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebafbd30-6f52-4209-b962-c97da4d4f9da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.137761 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8f22a6-98e2-45a6-9589-77968163dd98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb8f22a6-98e2-45a6-9589-77968163dd98" (UID: "cb8f22a6-98e2-45a6-9589-77968163dd98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.148129 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0369c78d-76d3-4407-bdf5-07a6c326335f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0369c78d-76d3-4407-bdf5-07a6c326335f" (UID: "0369c78d-76d3-4407-bdf5-07a6c326335f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.149745 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0369c78d-76d3-4407-bdf5-07a6c326335f-kube-api-access-jw6qf" (OuterVolumeSpecName: "kube-api-access-jw6qf") pod "0369c78d-76d3-4407-bdf5-07a6c326335f" (UID: "0369c78d-76d3-4407-bdf5-07a6c326335f"). InnerVolumeSpecName "kube-api-access-jw6qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.162710 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8f22a6-98e2-45a6-9589-77968163dd98-kube-api-access-48n59" (OuterVolumeSpecName: "kube-api-access-48n59") pod "cb8f22a6-98e2-45a6-9589-77968163dd98" (UID: "cb8f22a6-98e2-45a6-9589-77968163dd98"). InnerVolumeSpecName "kube-api-access-48n59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.171262 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.226836 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.246579 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48n59\" (UniqueName: \"kubernetes.io/projected/cb8f22a6-98e2-45a6-9589-77968163dd98-kube-api-access-48n59\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.246619 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw6qf\" (UniqueName: \"kubernetes.io/projected/0369c78d-76d3-4407-bdf5-07a6c326335f-kube-api-access-jw6qf\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.246631 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0369c78d-76d3-4407-bdf5-07a6c326335f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.246661 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb8f22a6-98e2-45a6-9589-77968163dd98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.258236 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.347902 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs62f\" (UniqueName: \"kubernetes.io/projected/4579b579-c870-402e-90ca-0d37db6e919d-kube-api-access-qs62f\") pod \"4579b579-c870-402e-90ca-0d37db6e919d\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.348170 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnswd\" (UniqueName: \"kubernetes.io/projected/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-kube-api-access-vnswd\") pod \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.348328 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-operator-scripts\") pod \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\" (UID: \"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.348368 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4579b579-c870-402e-90ca-0d37db6e919d-operator-scripts\") pod \"4579b579-c870-402e-90ca-0d37db6e919d\" (UID: \"4579b579-c870-402e-90ca-0d37db6e919d\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.349628 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d5e4f99-16c5-43fa-8606-e4b1656e2eaf" (UID: "0d5e4f99-16c5-43fa-8606-e4b1656e2eaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.350127 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4579b579-c870-402e-90ca-0d37db6e919d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4579b579-c870-402e-90ca-0d37db6e919d" (UID: "4579b579-c870-402e-90ca-0d37db6e919d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.354717 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4579b579-c870-402e-90ca-0d37db6e919d-kube-api-access-qs62f" (OuterVolumeSpecName: "kube-api-access-qs62f") pod "4579b579-c870-402e-90ca-0d37db6e919d" (UID: "4579b579-c870-402e-90ca-0d37db6e919d"). InnerVolumeSpecName "kube-api-access-qs62f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.357019 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-kube-api-access-vnswd" (OuterVolumeSpecName: "kube-api-access-vnswd") pod "0d5e4f99-16c5-43fa-8606-e4b1656e2eaf" (UID: "0d5e4f99-16c5-43fa-8606-e4b1656e2eaf"). InnerVolumeSpecName "kube-api-access-vnswd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.450437 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-operator-scripts\") pod \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.450840 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79pq5\" (UniqueName: \"kubernetes.io/projected/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-kube-api-access-79pq5\") pod \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\" (UID: \"6a7835af-e3df-48e7-9db2-4c5fd0f75baf\") " Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.451337 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnswd\" (UniqueName: \"kubernetes.io/projected/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-kube-api-access-vnswd\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.451353 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.451364 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4579b579-c870-402e-90ca-0d37db6e919d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.451375 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs62f\" (UniqueName: \"kubernetes.io/projected/4579b579-c870-402e-90ca-0d37db6e919d-kube-api-access-qs62f\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.451717 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a7835af-e3df-48e7-9db2-4c5fd0f75baf" (UID: "6a7835af-e3df-48e7-9db2-4c5fd0f75baf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.456104 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-kube-api-access-79pq5" (OuterVolumeSpecName: "kube-api-access-79pq5") pod "6a7835af-e3df-48e7-9db2-4c5fd0f75baf" (UID: "6a7835af-e3df-48e7-9db2-4c5fd0f75baf"). InnerVolumeSpecName "kube-api-access-79pq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.481718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xlb7n" event={"ID":"4579b579-c870-402e-90ca-0d37db6e919d","Type":"ContainerDied","Data":"ed7fdd1b3f922f4a513586774718cd304fc4e191c231781adacb5db6b7b574e3"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.481789 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed7fdd1b3f922f4a513586774718cd304fc4e191c231781adacb5db6b7b574e3" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.481879 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xlb7n" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.486885 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" event={"ID":"0369c78d-76d3-4407-bdf5-07a6c326335f","Type":"ContainerDied","Data":"427b94fad39f92fcffd27561ee94a1456e8c46bf28e4ba4101dbd061cd6c99f5"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.486949 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="427b94fad39f92fcffd27561ee94a1456e8c46bf28e4ba4101dbd061cd6c99f5" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.487036 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cf90-account-create-update-mgq2x" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.491206 4771 generic.go:334] "Generic (PLEG): container finished" podID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerID="e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f" exitCode=2 Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.491244 4771 generic.go:334] "Generic (PLEG): container finished" podID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerID="c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc" exitCode=0 Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.491298 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerDied","Data":"e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.491333 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerDied","Data":"c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.497239 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cpvm8" event={"ID":"6a7835af-e3df-48e7-9db2-4c5fd0f75baf","Type":"ContainerDied","Data":"dfcee13b89a6413804e283b3fb34f608e604963082cc6041b607602d37e1abd7"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.497273 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcee13b89a6413804e283b3fb34f608e604963082cc6041b607602d37e1abd7" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.497358 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cpvm8" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.502789 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lbvw8" event={"ID":"0d5e4f99-16c5-43fa-8606-e4b1656e2eaf","Type":"ContainerDied","Data":"bb21bc7b184684c828986f9fc0e676da3d42a1ce0868c07d91f50a7066936138"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.502825 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb21bc7b184684c828986f9fc0e676da3d42a1ce0868c07d91f50a7066936138" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.502888 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lbvw8" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.512702 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ebafbd30-6f52-4209-b962-c97da4d4f9da","Type":"ContainerDied","Data":"0fff1bc76cda7afdb87b26a2a56617707fa1eee249a18f8a6561ae5dcac73214"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.512764 4771 scope.go:117] "RemoveContainer" containerID="7dfd3ffe2bc11f3fbc6ea00e102c90f8f98b6ef5eefecf4ea870b9516452b295" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.512896 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.517880 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac2a-account-create-update-4slxw" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.519452 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac2a-account-create-update-4slxw" event={"ID":"cb8f22a6-98e2-45a6-9589-77968163dd98","Type":"ContainerDied","Data":"208f1c811c5f5ec6b65a50185dcba88cfad24d5c1694da58ecb476b4c93e5bd3"} Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.519493 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208f1c811c5f5ec6b65a50185dcba88cfad24d5c1694da58ecb476b4c93e5bd3" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.553242 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79pq5\" (UniqueName: \"kubernetes.io/projected/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-kube-api-access-79pq5\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.553280 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a7835af-e3df-48e7-9db2-4c5fd0f75baf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.681248 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.727224 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.763521 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765287 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765329 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765353 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765362 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765378 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8f22a6-98e2-45a6-9589-77968163dd98" 
containerName="mariadb-account-create-update" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765386 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8f22a6-98e2-45a6-9589-77968163dd98" containerName="mariadb-account-create-update" Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765422 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765435 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765450 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4579b579-c870-402e-90ca-0d37db6e919d" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765459 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4579b579-c870-402e-90ca-0d37db6e919d" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765505 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7835af-e3df-48e7-9db2-4c5fd0f75baf" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765516 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7835af-e3df-48e7-9db2-4c5fd0f75baf" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765532 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5e4f99-16c5-43fa-8606-e4b1656e2eaf" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765541 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5e4f99-16c5-43fa-8606-e4b1656e2eaf" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: E0123 13:53:05.765553 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0369c78d-76d3-4407-bdf5-07a6c326335f" containerName="mariadb-account-create-update" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765561 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0369c78d-76d3-4407-bdf5-07a6c326335f" containerName="mariadb-account-create-update" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765865 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765893 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0369c78d-76d3-4407-bdf5-07a6c326335f" containerName="mariadb-account-create-update" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765906 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765918 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5e4f99-16c5-43fa-8606-e4b1656e2eaf" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765964 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4579b579-c870-402e-90ca-0d37db6e919d" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765978 4771 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.765990 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7835af-e3df-48e7-9db2-4c5fd0f75baf" containerName="mariadb-database-create" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.766002 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.766020 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8f22a6-98e2-45a6-9589-77968163dd98" containerName="mariadb-account-create-update" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.768138 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.773822 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.816359 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.864260 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.864573 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxrl7\" (UniqueName: \"kubernetes.io/projected/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-kube-api-access-xxrl7\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.864647 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.864676 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-logs\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.864707 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.960819 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.966748 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxrl7\" (UniqueName: \"kubernetes.io/projected/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-kube-api-access-xxrl7\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.966848 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.966887 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-logs\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.966921 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.966995 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.967576 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-logs\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.974284 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.974831 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.975859 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-config-data\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:05 crc kubenswrapper[4771]: I0123 13:53:05.995107 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxrl7\" (UniqueName: \"kubernetes.io/projected/b05a43a3-db12-4944-a8d7-e2f6c27b48f3-kube-api-access-xxrl7\") pod \"watcher-decision-engine-0\" (UID: \"b05a43a3-db12-4944-a8d7-e2f6c27b48f3\") " pod="openstack/watcher-decision-engine-0" Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.067988 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbk8v\" (UniqueName: \"kubernetes.io/projected/f93e3306-16f5-4d49-88e0-0e5baef7912c-kube-api-access-vbk8v\") pod \"f93e3306-16f5-4d49-88e0-0e5baef7912c\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.068676 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f93e3306-16f5-4d49-88e0-0e5baef7912c-operator-scripts\") pod \"f93e3306-16f5-4d49-88e0-0e5baef7912c\" (UID: \"f93e3306-16f5-4d49-88e0-0e5baef7912c\") " Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.069161 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93e3306-16f5-4d49-88e0-0e5baef7912c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f93e3306-16f5-4d49-88e0-0e5baef7912c" (UID: "f93e3306-16f5-4d49-88e0-0e5baef7912c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.069530 4771 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f93e3306-16f5-4d49-88e0-0e5baef7912c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.075452 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93e3306-16f5-4d49-88e0-0e5baef7912c-kube-api-access-vbk8v" (OuterVolumeSpecName: "kube-api-access-vbk8v") pod "f93e3306-16f5-4d49-88e0-0e5baef7912c" (UID: "f93e3306-16f5-4d49-88e0-0e5baef7912c"). InnerVolumeSpecName "kube-api-access-vbk8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.173458 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbk8v\" (UniqueName: \"kubernetes.io/projected/f93e3306-16f5-4d49-88e0-0e5baef7912c-kube-api-access-vbk8v\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.256891 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.568752 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-772b-account-create-update-xq297" Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.570106 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-772b-account-create-update-xq297" event={"ID":"f93e3306-16f5-4d49-88e0-0e5baef7912c","Type":"ContainerDied","Data":"c64315a698dca951d618b2110906fd5700c3c00bf78b7e1f4d8fb761bfd0aea2"} Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.570179 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64315a698dca951d618b2110906fd5700c3c00bf78b7e1f4d8fb761bfd0aea2" Jan 23 13:53:06 crc kubenswrapper[4771]: W0123 13:53:06.802083 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb05a43a3_db12_4944_a8d7_e2f6c27b48f3.slice/crio-eec48e5b325a08ad0161ed66c58fe8496834b252b11acc9030e2144eb05c02ed WatchSource:0}: Error finding container eec48e5b325a08ad0161ed66c58fe8496834b252b11acc9030e2144eb05c02ed: Status 404 returned error can't find the container with id eec48e5b325a08ad0161ed66c58fe8496834b252b11acc9030e2144eb05c02ed Jan 23 13:53:06 crc kubenswrapper[4771]: I0123 13:53:06.803581 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 23 13:53:07 crc kubenswrapper[4771]: I0123 13:53:07.017928 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 13:53:07 crc kubenswrapper[4771]: I0123 13:53:07.240197 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" path="/var/lib/kubelet/pods/ebafbd30-6f52-4209-b962-c97da4d4f9da/volumes" Jan 23 13:53:07 crc kubenswrapper[4771]: I0123 13:53:07.580978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b05a43a3-db12-4944-a8d7-e2f6c27b48f3","Type":"ContainerStarted","Data":"60947660c8757e7e89fb54702b07402f895c3032432e110dca2083d45f8eb331"} Jan 23 13:53:07 crc kubenswrapper[4771]: I0123 13:53:07.581063 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"b05a43a3-db12-4944-a8d7-e2f6c27b48f3","Type":"ContainerStarted","Data":"eec48e5b325a08ad0161ed66c58fe8496834b252b11acc9030e2144eb05c02ed"} Jan 23 13:53:07 crc kubenswrapper[4771]: I0123 13:53:07.612785 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.612753824 podStartE2EDuration="2.612753824s" podCreationTimestamp="2026-01-23 13:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:53:07.608448257 +0000 UTC m=+1228.630985912" watchObservedRunningTime="2026-01-23 13:53:07.612753824 +0000 UTC m=+1228.635291459" Jan 23 13:53:08 crc kubenswrapper[4771]: I0123 13:53:08.406747 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 13:53:08 crc kubenswrapper[4771]: I0123 13:53:08.407112 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 13:53:08 crc kubenswrapper[4771]: I0123 13:53:08.474005 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 13:53:08 crc kubenswrapper[4771]: I0123 13:53:08.475067 
Jan 23 13:53:08 crc kubenswrapper[4771]: I0123 13:53:08.591041 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 23 13:53:08 crc kubenswrapper[4771]: I0123 13:53:08.592203 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 23 13:53:10 crc kubenswrapper[4771]: I0123 13:53:10.613064 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 13:53:10 crc kubenswrapper[4771]: I0123 13:53:10.614611 4771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 13:53:10 crc kubenswrapper[4771]: I0123 13:53:10.687481 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 13:53:10 crc kubenswrapper[4771]: I0123 13:53:10.688030 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.140634 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2bw75"]
Jan 23 13:53:11 crc kubenswrapper[4771]: E0123 13:53:11.141205 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93e3306-16f5-4d49-88e0-0e5baef7912c" containerName="mariadb-account-create-update"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.141227 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93e3306-16f5-4d49-88e0-0e5baef7912c" containerName="mariadb-account-create-update"
Jan 23 13:53:11 crc kubenswrapper[4771]: E0123 13:53:11.141247 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.141254 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebafbd30-6f52-4209-b962-c97da4d4f9da" containerName="watcher-decision-engine"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.141499 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93e3306-16f5-4d49-88e0-0e5baef7912c" containerName="mariadb-account-create-update"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.142331 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.145162 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5nz6m"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.145490 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.159606 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.191013 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2bw75"]
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.202884 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.202960 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w747t\" (UniqueName: \"kubernetes.io/projected/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-kube-api-access-w747t\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.203169 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-scripts\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.203547 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-config-data\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.305398 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-scripts\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.305870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-config-data\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.305974 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.306020 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w747t\" (UniqueName: \"kubernetes.io/projected/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-kube-api-access-w747t\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.314155 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-config-data\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.316055 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.325866 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-scripts\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.329126 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w747t\" (UniqueName: \"kubernetes.io/projected/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-kube-api-access-w747t\") pod \"nova-cell0-conductor-db-sync-2bw75\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Jan 23 13:53:11 crc kubenswrapper[4771]: I0123 13:53:11.551363 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2bw75"
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2bw75" Jan 23 13:53:12 crc kubenswrapper[4771]: I0123 13:53:12.160211 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2bw75"] Jan 23 13:53:12 crc kubenswrapper[4771]: I0123 13:53:12.660854 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2bw75" event={"ID":"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe","Type":"ContainerStarted","Data":"1e011e9f3a616c45e5e58eb4c13f41f5a8615648d183a980478108d59ae9dcac"} Jan 23 13:53:12 crc kubenswrapper[4771]: I0123 13:53:12.663512 4771 generic.go:334] "Generic (PLEG): container finished" podID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerID="fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564" exitCode=0 Jan 23 13:53:12 crc kubenswrapper[4771]: I0123 13:53:12.663893 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerDied","Data":"fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564"} Jan 23 13:53:16 crc kubenswrapper[4771]: I0123 13:53:16.258432 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:16 crc kubenswrapper[4771]: I0123 13:53:16.318030 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:16 crc kubenswrapper[4771]: I0123 13:53:16.721311 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:16 crc kubenswrapper[4771]: I0123 13:53:16.768426 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 23 13:53:23 crc kubenswrapper[4771]: I0123 13:53:23.852593 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2bw75" event={"ID":"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe","Type":"ContainerStarted","Data":"e94f7c254f5b712c8d84cebdab80b44c269c41165653b98362902c9f9a26a346"} Jan 23 13:53:23 crc kubenswrapper[4771]: I0123 13:53:23.885182 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2bw75" podStartSLOduration=2.223612041 podStartE2EDuration="12.885152897s" podCreationTimestamp="2026-01-23 13:53:11 +0000 UTC" firstStartedPulling="2026-01-23 13:53:12.165049884 +0000 UTC m=+1233.187587509" lastFinishedPulling="2026-01-23 13:53:22.82659074 +0000 UTC m=+1243.849128365" observedRunningTime="2026-01-23 13:53:23.876657258 +0000 UTC m=+1244.899194883" watchObservedRunningTime="2026-01-23 13:53:23.885152897 +0000 UTC m=+1244.907690532" Jan 23 13:53:29 crc kubenswrapper[4771]: I0123 13:53:29.453572 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 13:53:34 crc kubenswrapper[4771]: I0123 13:53:34.958752 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:53:34 crc kubenswrapper[4771]: I0123 13:53:34.995571 4771 generic.go:334] "Generic (PLEG): container finished" podID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerID="4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8" exitCode=137 Jan 23 13:53:34 crc kubenswrapper[4771]: I0123 13:53:34.995682 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerDied","Data":"4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8"} Jan 23 13:53:34 crc kubenswrapper[4771]: I0123 13:53:34.995736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9","Type":"ContainerDied","Data":"f205ad738b13bc38a4a3a3a6f3569fa7bcedc2cb625372e4a08e9401d34332d3"} Jan 23 13:53:34 crc kubenswrapper[4771]: I0123 13:53:34.995765 4771 scope.go:117] "RemoveContainer" containerID="4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8" Jan 23 13:53:34 crc kubenswrapper[4771]: I0123 13:53:34.996049 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.025592 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-run-httpd\") pod \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.025925 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-scripts\") pod \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.025962 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-log-httpd\") pod \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.026167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-combined-ca-bundle\") pod \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.026291 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-sg-core-conf-yaml\") pod \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.026324 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-config-data\") pod \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.026343 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwmp8\" (UniqueName: 
\"kubernetes.io/projected/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-kube-api-access-fwmp8\") pod \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\" (UID: \"cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9\") " Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.029436 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" (UID: "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.029766 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" (UID: "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.030208 4771 scope.go:117] "RemoveContainer" containerID="e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.039124 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-kube-api-access-fwmp8" (OuterVolumeSpecName: "kube-api-access-fwmp8") pod "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" (UID: "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9"). InnerVolumeSpecName "kube-api-access-fwmp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.041969 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-scripts" (OuterVolumeSpecName: "scripts") pod "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" (UID: "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.066623 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" (UID: "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.129716 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.129773 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwmp8\" (UniqueName: \"kubernetes.io/projected/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-kube-api-access-fwmp8\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.129789 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.129804 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.129813 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.140037 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" (UID: "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.160536 4771 scope.go:117] "RemoveContainer" containerID="c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.164090 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-config-data" (OuterVolumeSpecName: "config-data") pod "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" (UID: "cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.189168 4771 scope.go:117] "RemoveContainer" containerID="fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.218240 4771 scope.go:117] "RemoveContainer" containerID="4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8" Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.224338 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8\": container with ID starting with 4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8 not found: ID does not exist" containerID="4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.224476 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8"} err="failed to get container status \"4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8\": rpc error: code = NotFound desc = could not find container \"4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8\": container with ID starting with 4a8c3ca8bbe0879202cf6595130c6659f11cde4bb1f78207de2aa8dc39a324f8 not found: ID does not exist" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.224520 4771 scope.go:117] "RemoveContainer" containerID="e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f" Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.225116 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f\": container with ID starting with e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f not found: ID does not exist" containerID="e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.225134 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f"} err="failed to get container status \"e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f\": rpc error: code = NotFound desc = could not find container \"e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f\": container with ID starting with e4cc7a72e61144e86cb8bf59feac6ab921281572bd2db21a79cbe1f672db1c7f not found: ID does not exist" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.225147 4771 scope.go:117] "RemoveContainer" containerID="c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc" Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.225833 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc\": container with ID starting with c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc not found: ID does not exist" containerID="c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.225852 4771 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc"} err="failed to get container status \"c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc\": rpc error: code = NotFound desc = could not find container \"c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc\": container with ID starting with c15502061337f8ccf6a14f0fdb2baa91f906b2af5dc364051550b4d1bf0c05dc not found: ID does not exist" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.225867 4771 scope.go:117] "RemoveContainer" containerID="fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564" Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.227963 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564\": container with ID starting with fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564 not found: ID does not exist" containerID="fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.228055 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564"} err="failed to get container status \"fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564\": rpc error: code = NotFound desc = could not find container \"fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564\": container with ID starting with fcfe7e1f20031ce2975f18667a9f49fc359d9c30f1083c39bb66f87d1f5a8564 not found: ID does not exist" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.232094 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.232148 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.326523 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.338270 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.363740 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.364514 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="ceilometer-central-agent" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364544 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="ceilometer-central-agent" Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.364563 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="proxy-httpd" Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364571 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="proxy-httpd" Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.364591 4771 
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364602 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="ceilometer-notification-agent"
Jan 23 13:53:35 crc kubenswrapper[4771]: E0123 13:53:35.364620 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="sg-core"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364628 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="sg-core"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364917 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="sg-core"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364950 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="ceilometer-central-agent"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364971 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="proxy-httpd"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.364987 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" containerName="ceilometer-notification-agent"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.367660 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.370300 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.372491 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.379316 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.436725 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-log-httpd\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.436813 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-config-data\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.436999 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-run-httpd\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.437101 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.437239 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-scripts\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.437376 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.437581 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7n9k\" (UniqueName: \"kubernetes.io/projected/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-kube-api-access-s7n9k\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.540446 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7n9k\" (UniqueName: \"kubernetes.io/projected/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-kube-api-access-s7n9k\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.541958 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-log-httpd\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.542230 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-config-data\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.542311 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-run-httpd\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.542379 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.542590 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-log-httpd\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.542891 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-run-httpd\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.543626 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-scripts\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.543783 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.548554 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.549279 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-scripts\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.550704 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-config-data\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.550712 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.561313 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7n9k\" (UniqueName: \"kubernetes.io/projected/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-kube-api-access-s7n9k\") pod \"ceilometer-0\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " pod="openstack/ceilometer-0"
Jan 23 13:53:35 crc kubenswrapper[4771]: I0123 13:53:35.696522 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 13:53:36 crc kubenswrapper[4771]: I0123 13:53:36.227351 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:53:37 crc kubenswrapper[4771]: I0123 13:53:37.045421 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerStarted","Data":"eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72"}
Jan 23 13:53:37 crc kubenswrapper[4771]: I0123 13:53:37.046157 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerStarted","Data":"29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7"}
Jan 23 13:53:37 crc kubenswrapper[4771]: I0123 13:53:37.046172 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerStarted","Data":"38c23fa1ad827fd03ffe043ae62d13fc5dc04fffa59ce02da29cdfa6cf96ff1e"}
Jan 23 13:53:37 crc kubenswrapper[4771]: I0123 13:53:37.245333 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9" path="/var/lib/kubelet/pods/cc99c1e3-2a66-4ab2-9b34-fabc7b512bf9/volumes"
Jan 23 13:53:39 crc kubenswrapper[4771]: I0123 13:53:39.076584 4771 generic.go:334] "Generic (PLEG): container finished" podID="c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" containerID="e94f7c254f5b712c8d84cebdab80b44c269c41165653b98362902c9f9a26a346" exitCode=0
Jan 23 13:53:39 crc kubenswrapper[4771]: I0123 13:53:39.076743 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2bw75" event={"ID":"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe","Type":"ContainerDied","Data":"e94f7c254f5b712c8d84cebdab80b44c269c41165653b98362902c9f9a26a346"}
Jan 23 13:53:39 crc kubenswrapper[4771]: I0123 13:53:39.081384 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerStarted","Data":"02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086"}
Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.630401 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2bw75" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.665022 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-combined-ca-bundle\") pod \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.665173 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w747t\" (UniqueName: \"kubernetes.io/projected/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-kube-api-access-w747t\") pod \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.665288 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-scripts\") pod \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.665859 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-config-data\") pod \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\" (UID: \"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe\") " Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.670791 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-scripts" (OuterVolumeSpecName: "scripts") pod "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" (UID: "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.671426 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-kube-api-access-w747t" (OuterVolumeSpecName: "kube-api-access-w747t") pod "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" (UID: "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe"). InnerVolumeSpecName "kube-api-access-w747t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.707186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-config-data" (OuterVolumeSpecName: "config-data") pod "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" (UID: "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.707700 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" (UID: "c7e875d7-8f49-4d0d-a51e-3e0c5071bafe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.769094 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.769351 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.769487 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w747t\" (UniqueName: \"kubernetes.io/projected/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-kube-api-access-w747t\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:40 crc kubenswrapper[4771]: I0123 13:53:40.769590 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.109006 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2bw75" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.108998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2bw75" event={"ID":"c7e875d7-8f49-4d0d-a51e-3e0c5071bafe","Type":"ContainerDied","Data":"1e011e9f3a616c45e5e58eb4c13f41f5a8615648d183a980478108d59ae9dcac"} Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.110085 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e011e9f3a616c45e5e58eb4c13f41f5a8615648d183a980478108d59ae9dcac" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.150360 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerStarted","Data":"87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13"} Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.151905 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.187484 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9601741019999999 podStartE2EDuration="6.187461059s" podCreationTimestamp="2026-01-23 13:53:35 +0000 UTC" firstStartedPulling="2026-01-23 13:53:36.232427263 +0000 UTC m=+1257.254964888" lastFinishedPulling="2026-01-23 13:53:40.45971422 +0000 UTC m=+1261.482251845" observedRunningTime="2026-01-23 13:53:41.176298046 +0000 UTC m=+1262.198835671" watchObservedRunningTime="2026-01-23 13:53:41.187461059 +0000 UTC m=+1262.209998684" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.250481 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 13:53:41 crc kubenswrapper[4771]: E0123 13:53:41.250946 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" containerName="nova-cell0-conductor-db-sync" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.250967 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" containerName="nova-cell0-conductor-db-sync" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 
13:53:41.251422 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" containerName="nova-cell0-conductor-db-sync" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.252296 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.256771 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5nz6m" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.269846 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.270801 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.283379 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae7cd57-ed9a-4e28-bea0-c1240a462e64-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.283573 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blv5\" (UniqueName: \"kubernetes.io/projected/aae7cd57-ed9a-4e28-bea0-c1240a462e64-kube-api-access-7blv5\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.283650 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7cd57-ed9a-4e28-bea0-c1240a462e64-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.386221 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae7cd57-ed9a-4e28-bea0-c1240a462e64-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.386302 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blv5\" (UniqueName: \"kubernetes.io/projected/aae7cd57-ed9a-4e28-bea0-c1240a462e64-kube-api-access-7blv5\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.386353 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7cd57-ed9a-4e28-bea0-c1240a462e64-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.390982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae7cd57-ed9a-4e28-bea0-c1240a462e64-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 
23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.392373 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae7cd57-ed9a-4e28-bea0-c1240a462e64-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.403757 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blv5\" (UniqueName: \"kubernetes.io/projected/aae7cd57-ed9a-4e28-bea0-c1240a462e64-kube-api-access-7blv5\") pod \"nova-cell0-conductor-0\" (UID: \"aae7cd57-ed9a-4e28-bea0-c1240a462e64\") " pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:41 crc kubenswrapper[4771]: I0123 13:53:41.577913 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:42 crc kubenswrapper[4771]: I0123 13:53:42.092883 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 13:53:42 crc kubenswrapper[4771]: I0123 13:53:42.166959 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aae7cd57-ed9a-4e28-bea0-c1240a462e64","Type":"ContainerStarted","Data":"13cfe82792c63ef4d1a2b38c1c1ea1951edaa3f7b35b6c7702c5ee29502dcf7c"} Jan 23 13:53:43 crc kubenswrapper[4771]: I0123 13:53:43.178611 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aae7cd57-ed9a-4e28-bea0-c1240a462e64","Type":"ContainerStarted","Data":"ad88b7a4f2d411c18125a800e5584754ee1cd4adf2f4696078f2ee61c33a4c54"} Jan 23 13:53:43 crc kubenswrapper[4771]: I0123 13:53:43.179059 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:43 crc kubenswrapper[4771]: I0123 13:53:43.202650 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.202629324 podStartE2EDuration="2.202629324s" podCreationTimestamp="2026-01-23 13:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:53:43.19427895 +0000 UTC m=+1264.216816575" watchObservedRunningTime="2026-01-23 13:53:43.202629324 +0000 UTC m=+1264.225166949" Jan 23 13:53:51 crc kubenswrapper[4771]: I0123 13:53:51.654202 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.137328 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zchgr"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.140058 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.142949 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.143198 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.150375 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zchgr"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.269501 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.269662 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-scripts\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.269694 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzvx\" (UniqueName: \"kubernetes.io/projected/ebb9ca3b-06d1-428d-a140-b946a9ef5931-kube-api-access-8fzvx\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.269814 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-config-data\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.373595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.373744 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-scripts\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.373790 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fzvx\" (UniqueName: \"kubernetes.io/projected/ebb9ca3b-06d1-428d-a140-b946a9ef5931-kube-api-access-8fzvx\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.373890 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-config-data\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.392549 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.429110 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fzvx\" (UniqueName: \"kubernetes.io/projected/ebb9ca3b-06d1-428d-a140-b946a9ef5931-kube-api-access-8fzvx\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.430485 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-scripts\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.460138 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-config-data\") pod \"nova-cell0-cell-mapping-zchgr\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.460556 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.479023 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.486797 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.487960 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.533520 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.552481 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.554258 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.562175 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.615431 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.615651 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.616030 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ksc\" (UniqueName: \"kubernetes.io/projected/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-kube-api-access-r2ksc\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.616196 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7bqf\" (UniqueName: \"kubernetes.io/projected/93f8c133-f857-4fff-87e1-fd9b79e946eb-kube-api-access-z7bqf\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.616497 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-logs\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.616639 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-config-data\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.616696 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-config-data\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.616819 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.685661 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.688257 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.696740 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.719345 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-config-data\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.719393 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-config-data\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.719496 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.719579 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.719632 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ksc\" (UniqueName: \"kubernetes.io/projected/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-kube-api-access-r2ksc\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.719679 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7bqf\" (UniqueName: \"kubernetes.io/projected/93f8c133-f857-4fff-87e1-fd9b79e946eb-kube-api-access-z7bqf\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.719737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-logs\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.720322 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-logs\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.751562 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.767705 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.777789 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-config-data\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.779793 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-config-data\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.800516 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.829505 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.831869 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.833990 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ksc\" (UniqueName: \"kubernetes.io/projected/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-kube-api-access-r2ksc\") pod \"nova-api-0\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " pod="openstack/nova-api-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.834916 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.848543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7bqf\" (UniqueName: \"kubernetes.io/projected/93f8c133-f857-4fff-87e1-fd9b79e946eb-kube-api-access-z7bqf\") pod \"nova-scheduler-0\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " pod="openstack/nova-scheduler-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.835846 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9ht\" (UniqueName: \"kubernetes.io/projected/4f7cdf71-37b4-44e6-884d-7617c8f804c0-kube-api-access-2p9ht\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.848755 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.848813 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-config-data\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.848991 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f7cdf71-37b4-44e6-884d-7617c8f804c0-logs\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.858254 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.907568 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c69974895-gdz7g"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.909921 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.919018 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c69974895-gdz7g"] Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.952202 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9ht\" (UniqueName: \"kubernetes.io/projected/4f7cdf71-37b4-44e6-884d-7617c8f804c0-kube-api-access-2p9ht\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.952747 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j72h\" (UniqueName: \"kubernetes.io/projected/d33af987-32a1-48ce-878c-534c0d3801aa-kube-api-access-8j72h\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.952800 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.952843 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.954550 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.954799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-config-data\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.955350 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f7cdf71-37b4-44e6-884d-7617c8f804c0-logs\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " 
pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.959366 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f7cdf71-37b4-44e6-884d-7617c8f804c0-logs\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.959660 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-config-data\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.962503 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:52 crc kubenswrapper[4771]: I0123 13:53:52.981379 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9ht\" (UniqueName: \"kubernetes.io/projected/4f7cdf71-37b4-44e6-884d-7617c8f804c0-kube-api-access-2p9ht\") pod \"nova-metadata-0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " pod="openstack/nova-metadata-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.043578 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.057991 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rk2j\" (UniqueName: \"kubernetes.io/projected/968095d0-4c3b-4224-837b-b7a36dfb530a-kube-api-access-2rk2j\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.058080 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j72h\" (UniqueName: \"kubernetes.io/projected/d33af987-32a1-48ce-878c-534c0d3801aa-kube-api-access-8j72h\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.058145 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.058163 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-sb\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.058210 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.058280 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-config\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.059041 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-nb\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.059126 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-swift-storage-0\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.059344 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-svc\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.070449 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.073788 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.077300 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j72h\" (UniqueName: \"kubernetes.io/projected/d33af987-32a1-48ce-878c-534c0d3801aa-kube-api-access-8j72h\") pod \"nova-cell1-novncproxy-0\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.085232 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.152396 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.162293 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-nb\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.162377 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-swift-storage-0\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.162434 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-svc\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.162477 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rk2j\" (UniqueName: \"kubernetes.io/projected/968095d0-4c3b-4224-837b-b7a36dfb530a-kube-api-access-2rk2j\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.162518 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-sb\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.162551 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-config\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.163481 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-nb\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.163489 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-config\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.164208 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-sb\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: 
I0123 13:53:53.164260 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-svc\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.164354 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-swift-storage-0\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.173001 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.194083 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rk2j\" (UniqueName: \"kubernetes.io/projected/968095d0-4c3b-4224-837b-b7a36dfb530a-kube-api-access-2rk2j\") pod \"dnsmasq-dns-7c69974895-gdz7g\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.243162 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.261071 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zchgr"] Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.314381 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zchgr" event={"ID":"ebb9ca3b-06d1-428d-a140-b946a9ef5931","Type":"ContainerStarted","Data":"ec572735ade6be7ce526cdcd850f3b00ad35cdb035d6ea1f9ff02d9d090a5553"} Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.664482 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lwjw4"] Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.668597 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.673074 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.673381 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.682399 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lwjw4"] Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.739321 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.820398 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-scripts\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.821868 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqh7p\" (UniqueName: \"kubernetes.io/projected/3bdfda9a-c75f-412b-81f4-b33bb47d9435-kube-api-access-tqh7p\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.822215 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.822322 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-config-data\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.924703 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqh7p\" (UniqueName: \"kubernetes.io/projected/3bdfda9a-c75f-412b-81f4-b33bb47d9435-kube-api-access-tqh7p\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.924793 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.924826 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-config-data\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: 
\"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.924938 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-scripts\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.937141 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-config-data\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.938750 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.939804 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-scripts\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.955483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqh7p\" (UniqueName: \"kubernetes.io/projected/3bdfda9a-c75f-412b-81f4-b33bb47d9435-kube-api-access-tqh7p\") pod \"nova-cell1-conductor-db-sync-lwjw4\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:53 crc kubenswrapper[4771]: I0123 13:53:53.986883 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.002268 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.035870 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.238319 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.285134 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c69974895-gdz7g"] Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.328554 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" event={"ID":"968095d0-4c3b-4224-837b-b7a36dfb530a","Type":"ContainerStarted","Data":"d25a375ef8381c448ed05f84be6ced4fc6ca1e031a00d4b26e699a5607ee8975"} Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.333528 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zchgr" event={"ID":"ebb9ca3b-06d1-428d-a140-b946a9ef5931","Type":"ContainerStarted","Data":"82fd031f8745cac6311115cf334648b64b78793fb9d604cdd5ab14f8531e5583"} Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.347444 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d33af987-32a1-48ce-878c-534c0d3801aa","Type":"ContainerStarted","Data":"1079360b25070395ce302ba7303beada174c4303f8223e9e7db61e88b58ef87c"} Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.358862 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zchgr" podStartSLOduration=2.358836744 podStartE2EDuration="2.358836744s" podCreationTimestamp="2026-01-23 13:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:53:54.355951802 +0000 UTC m=+1275.378489437" watchObservedRunningTime="2026-01-23 13:53:54.358836744 +0000 UTC m=+1275.381374369" Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.359695 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"977d0eeb-f7ea-44fd-b2b9-5ec27f505119","Type":"ContainerStarted","Data":"2d71f5c19d8e28365dc4abb621829fee64561c976b0d326f17c8655aa78e2807"} Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.363197 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f7cdf71-37b4-44e6-884d-7617c8f804c0","Type":"ContainerStarted","Data":"eeca1b3f1fb7dca4413ba209dee622db370d1a7895920cd9fd061fef9071beeb"} Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.367226 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93f8c133-f857-4fff-87e1-fd9b79e946eb","Type":"ContainerStarted","Data":"3272db8b9b1b47c4ad8733d366ce509e4701c174530584aedf9fb68de8a22095"} Jan 23 13:53:54 crc kubenswrapper[4771]: I0123 13:53:54.579692 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lwjw4"] Jan 23 13:53:55 crc kubenswrapper[4771]: I0123 13:53:55.386878 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" event={"ID":"3bdfda9a-c75f-412b-81f4-b33bb47d9435","Type":"ContainerStarted","Data":"2aad550a44cab96e05de1c3e5f32f531e3aefb34835ef16ffd27b4740199bef3"} Jan 23 13:53:55 crc kubenswrapper[4771]: I0123 13:53:55.387740 4771 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" event={"ID":"3bdfda9a-c75f-412b-81f4-b33bb47d9435","Type":"ContainerStarted","Data":"b1cb769a6e4c22ee9bce4eba2f00b8b793ae250674cf5aa35a5ed6923c95d00f"} Jan 23 13:53:55 crc kubenswrapper[4771]: I0123 13:53:55.396386 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" event={"ID":"968095d0-4c3b-4224-837b-b7a36dfb530a","Type":"ContainerDied","Data":"382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb"} Jan 23 13:53:55 crc kubenswrapper[4771]: I0123 13:53:55.395015 4771 generic.go:334] "Generic (PLEG): container finished" podID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerID="382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb" exitCode=0 Jan 23 13:53:55 crc kubenswrapper[4771]: I0123 13:53:55.414991 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" podStartSLOduration=2.414967214 podStartE2EDuration="2.414967214s" podCreationTimestamp="2026-01-23 13:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:53:55.411782833 +0000 UTC m=+1276.434320448" watchObservedRunningTime="2026-01-23 13:53:55.414967214 +0000 UTC m=+1276.437504829" Jan 23 13:53:56 crc kubenswrapper[4771]: I0123 13:53:56.224507 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 13:53:56 crc kubenswrapper[4771]: I0123 13:53:56.238501 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.452269 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93f8c133-f857-4fff-87e1-fd9b79e946eb","Type":"ContainerStarted","Data":"1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9"} Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.460049 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" event={"ID":"968095d0-4c3b-4224-837b-b7a36dfb530a","Type":"ContainerStarted","Data":"7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d"} Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.460295 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.463386 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d33af987-32a1-48ce-878c-534c0d3801aa" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480" gracePeriod=30 Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.463688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d33af987-32a1-48ce-878c-534c0d3801aa","Type":"ContainerStarted","Data":"9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480"} Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.468783 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"977d0eeb-f7ea-44fd-b2b9-5ec27f505119","Type":"ContainerStarted","Data":"3f682d6e6871ac5735e39302bc8de0f7d8f6278d282874386f6c84485a8418d6"} Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.475683 4771 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f7cdf71-37b4-44e6-884d-7617c8f804c0","Type":"ContainerStarted","Data":"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76"} Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.488831 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.715118978 podStartE2EDuration="6.488799739s" podCreationTimestamp="2026-01-23 13:53:52 +0000 UTC" firstStartedPulling="2026-01-23 13:53:54.057461589 +0000 UTC m=+1275.079999214" lastFinishedPulling="2026-01-23 13:53:57.83114235 +0000 UTC m=+1278.853679975" observedRunningTime="2026-01-23 13:53:58.472485132 +0000 UTC m=+1279.495022757" watchObservedRunningTime="2026-01-23 13:53:58.488799739 +0000 UTC m=+1279.511337364" Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.509232 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.922887459 podStartE2EDuration="6.509208436s" podCreationTimestamp="2026-01-23 13:53:52 +0000 UTC" firstStartedPulling="2026-01-23 13:53:54.253223669 +0000 UTC m=+1275.275761294" lastFinishedPulling="2026-01-23 13:53:57.839544646 +0000 UTC m=+1278.862082271" observedRunningTime="2026-01-23 13:53:58.496072559 +0000 UTC m=+1279.518610214" watchObservedRunningTime="2026-01-23 13:53:58.509208436 +0000 UTC m=+1279.531746061" Jan 23 13:53:58 crc kubenswrapper[4771]: I0123 13:53:58.528801 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" podStartSLOduration=6.528772885 podStartE2EDuration="6.528772885s" podCreationTimestamp="2026-01-23 13:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:53:58.51852686 +0000 UTC m=+1279.541064505" watchObservedRunningTime="2026-01-23 13:53:58.528772885 +0000 UTC m=+1279.551310520" Jan 23 13:53:59 crc kubenswrapper[4771]: I0123 13:53:59.492250 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"977d0eeb-f7ea-44fd-b2b9-5ec27f505119","Type":"ContainerStarted","Data":"880a7cf5997d3e42cbd816d1cc72fd6bafe906812575337055044fd73e6bf75e"} Jan 23 13:53:59 crc kubenswrapper[4771]: I0123 13:53:59.494711 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f7cdf71-37b4-44e6-884d-7617c8f804c0","Type":"ContainerStarted","Data":"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035"} Jan 23 13:53:59 crc kubenswrapper[4771]: I0123 13:53:59.494843 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-log" containerID="cri-o://4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76" gracePeriod=30 Jan 23 13:53:59 crc kubenswrapper[4771]: I0123 13:53:59.495095 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-metadata" containerID="cri-o://94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035" gracePeriod=30 Jan 23 13:53:59 crc kubenswrapper[4771]: I0123 13:53:59.529304 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.433108438 podStartE2EDuration="7.529269363s" 
podCreationTimestamp="2026-01-23 13:53:52 +0000 UTC" firstStartedPulling="2026-01-23 13:53:53.751091585 +0000 UTC m=+1274.773629210" lastFinishedPulling="2026-01-23 13:53:57.84725248 +0000 UTC m=+1278.869790135" observedRunningTime="2026-01-23 13:53:59.523716927 +0000 UTC m=+1280.546254552" watchObservedRunningTime="2026-01-23 13:53:59.529269363 +0000 UTC m=+1280.551806988" Jan 23 13:53:59 crc kubenswrapper[4771]: I0123 13:53:59.551092 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.765773044 podStartE2EDuration="7.551061743s" podCreationTimestamp="2026-01-23 13:53:52 +0000 UTC" firstStartedPulling="2026-01-23 13:53:54.056155047 +0000 UTC m=+1275.078692672" lastFinishedPulling="2026-01-23 13:53:57.841443746 +0000 UTC m=+1278.863981371" observedRunningTime="2026-01-23 13:53:59.5452775 +0000 UTC m=+1280.567815125" watchObservedRunningTime="2026-01-23 13:53:59.551061743 +0000 UTC m=+1280.573599368" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.187927 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.235060 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f7cdf71-37b4-44e6-884d-7617c8f804c0-logs\") pod \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.235216 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p9ht\" (UniqueName: \"kubernetes.io/projected/4f7cdf71-37b4-44e6-884d-7617c8f804c0-kube-api-access-2p9ht\") pod \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.235253 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-combined-ca-bundle\") pod \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.235519 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f7cdf71-37b4-44e6-884d-7617c8f804c0-logs" (OuterVolumeSpecName: "logs") pod "4f7cdf71-37b4-44e6-884d-7617c8f804c0" (UID: "4f7cdf71-37b4-44e6-884d-7617c8f804c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.235594 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-config-data\") pod \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\" (UID: \"4f7cdf71-37b4-44e6-884d-7617c8f804c0\") " Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.236100 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f7cdf71-37b4-44e6-884d-7617c8f804c0-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.242541 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7cdf71-37b4-44e6-884d-7617c8f804c0-kube-api-access-2p9ht" (OuterVolumeSpecName: "kube-api-access-2p9ht") pod "4f7cdf71-37b4-44e6-884d-7617c8f804c0" (UID: "4f7cdf71-37b4-44e6-884d-7617c8f804c0"). InnerVolumeSpecName "kube-api-access-2p9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.284247 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f7cdf71-37b4-44e6-884d-7617c8f804c0" (UID: "4f7cdf71-37b4-44e6-884d-7617c8f804c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.286232 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-config-data" (OuterVolumeSpecName: "config-data") pod "4f7cdf71-37b4-44e6-884d-7617c8f804c0" (UID: "4f7cdf71-37b4-44e6-884d-7617c8f804c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.312233 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.312312 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.338441 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.338479 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p9ht\" (UniqueName: \"kubernetes.io/projected/4f7cdf71-37b4-44e6-884d-7617c8f804c0-kube-api-access-2p9ht\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.338495 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f7cdf71-37b4-44e6-884d-7617c8f804c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.506517 4771 generic.go:334] "Generic (PLEG): container finished" podID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerID="94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035" exitCode=0 Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.506565 4771 generic.go:334] "Generic (PLEG): container finished" podID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerID="4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76" exitCode=143 Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.506964 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.507902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f7cdf71-37b4-44e6-884d-7617c8f804c0","Type":"ContainerDied","Data":"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035"} Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.507938 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f7cdf71-37b4-44e6-884d-7617c8f804c0","Type":"ContainerDied","Data":"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76"} Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.507950 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4f7cdf71-37b4-44e6-884d-7617c8f804c0","Type":"ContainerDied","Data":"eeca1b3f1fb7dca4413ba209dee622db370d1a7895920cd9fd061fef9071beeb"} Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.507973 4771 scope.go:117] "RemoveContainer" containerID="94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.560031 4771 scope.go:117] "RemoveContainer" containerID="4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.572794 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.589541 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.604858 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.605438 4771 scope.go:117] "RemoveContainer" containerID="94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035" Jan 23 13:54:00 crc kubenswrapper[4771]: E0123 13:54:00.605529 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-log" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.605552 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-log" Jan 23 13:54:00 crc kubenswrapper[4771]: E0123 13:54:00.605570 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-metadata" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.605576 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-metadata" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.605797 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-log" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.605827 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" containerName="nova-metadata-metadata" Jan 23 13:54:00 crc kubenswrapper[4771]: E0123 13:54:00.607006 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035\": container with ID starting with 94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035 not found: ID does not exist" 
containerID="94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.607034 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.607051 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035"} err="failed to get container status \"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035\": rpc error: code = NotFound desc = could not find container \"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035\": container with ID starting with 94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035 not found: ID does not exist" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.607081 4771 scope.go:117] "RemoveContainer" containerID="4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76" Jan 23 13:54:00 crc kubenswrapper[4771]: E0123 13:54:00.609198 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76\": container with ID starting with 4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76 not found: ID does not exist" containerID="4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.613223 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76"} err="failed to get container status \"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76\": rpc error: code = NotFound desc = could not find container \"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76\": container with ID starting with 4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76 not found: ID does not exist" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.613266 4771 scope.go:117] "RemoveContainer" containerID="94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.613345 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.613620 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.614485 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035"} err="failed to get container status \"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035\": rpc error: code = NotFound desc = could not find container \"94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035\": container with ID starting with 94dd6c6bdaeb9d97e73ecbf16ff8f9e6a93e4dbb8680c67ad44e240d6d530035 not found: ID does not exist" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.614575 4771 scope.go:117] "RemoveContainer" containerID="4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.642941 4771 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76"} err="failed to get container status \"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76\": rpc error: code = NotFound desc = could not find container \"4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76\": container with ID starting with 4a7bb5a3c446dd48cf77c0d90a4f8fbfb89cbcf95e0e17563052222a3dbc0d76 not found: ID does not exist" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.652152 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-config-data\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.652391 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6gfd\" (UniqueName: \"kubernetes.io/projected/97d079d2-a569-4a0b-8842-bb92745c5dcd-kube-api-access-l6gfd\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.652498 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.652809 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.652838 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.652875 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d079d2-a569-4a0b-8842-bb92745c5dcd-logs\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.755125 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.755177 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.755220 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d079d2-a569-4a0b-8842-bb92745c5dcd-logs\") pod \"nova-metadata-0\" (UID: 
\"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.755283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-config-data\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.755339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6gfd\" (UniqueName: \"kubernetes.io/projected/97d079d2-a569-4a0b-8842-bb92745c5dcd-kube-api-access-l6gfd\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.756264 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d079d2-a569-4a0b-8842-bb92745c5dcd-logs\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.760834 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.762622 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-config-data\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.764207 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.776314 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6gfd\" (UniqueName: \"kubernetes.io/projected/97d079d2-a569-4a0b-8842-bb92745c5dcd-kube-api-access-l6gfd\") pod \"nova-metadata-0\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " pod="openstack/nova-metadata-0" Jan 23 13:54:00 crc kubenswrapper[4771]: I0123 13:54:00.932712 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:01 crc kubenswrapper[4771]: I0123 13:54:01.250258 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7cdf71-37b4-44e6-884d-7617c8f804c0" path="/var/lib/kubelet/pods/4f7cdf71-37b4-44e6-884d-7617c8f804c0/volumes" Jan 23 13:54:01 crc kubenswrapper[4771]: I0123 13:54:01.464002 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:01 crc kubenswrapper[4771]: W0123 13:54:01.464588 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d079d2_a569_4a0b_8842_bb92745c5dcd.slice/crio-ad94760f76990d45242268c768e8e90977ea4472814eac6ffbc0012dbbbc832e WatchSource:0}: Error finding container ad94760f76990d45242268c768e8e90977ea4472814eac6ffbc0012dbbbc832e: Status 404 returned error can't find the container with id ad94760f76990d45242268c768e8e90977ea4472814eac6ffbc0012dbbbc832e Jan 23 13:54:01 crc kubenswrapper[4771]: I0123 13:54:01.521113 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97d079d2-a569-4a0b-8842-bb92745c5dcd","Type":"ContainerStarted","Data":"ad94760f76990d45242268c768e8e90977ea4472814eac6ffbc0012dbbbc832e"} Jan 23 13:54:02 crc kubenswrapper[4771]: I0123 13:54:02.541817 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97d079d2-a569-4a0b-8842-bb92745c5dcd","Type":"ContainerStarted","Data":"9e557997f5a0299251316eefbcc9f43b906bf883f55d7c2d8241531737935b7e"} Jan 23 13:54:02 crc kubenswrapper[4771]: I0123 13:54:02.542624 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97d079d2-a569-4a0b-8842-bb92745c5dcd","Type":"ContainerStarted","Data":"2237ca52d620d6e33bc46863124b3fb6857dbeb6b34e02db0a3906ebf030e0df"} Jan 23 13:54:02 crc kubenswrapper[4771]: I0123 13:54:02.569201 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.569172783 podStartE2EDuration="2.569172783s" podCreationTimestamp="2026-01-23 13:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:02.566625922 +0000 UTC m=+1283.589163587" watchObservedRunningTime="2026-01-23 13:54:02.569172783 +0000 UTC m=+1283.591710448" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.044249 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.044696 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.086807 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.086859 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.150566 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.174239 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.250073 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.334291 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fc64bcc5-zfnkn"] Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.334725 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" podUID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerName="dnsmasq-dns" containerID="cri-o://33f34933169f5dffb367a4fee829842938952249f48bab7f7aa6d8e735d9b343" gracePeriod=10 Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.577890 4771 generic.go:334] "Generic (PLEG): container finished" podID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerID="33f34933169f5dffb367a4fee829842938952249f48bab7f7aa6d8e735d9b343" exitCode=0 Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.578497 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" event={"ID":"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1","Type":"ContainerDied","Data":"33f34933169f5dffb367a4fee829842938952249f48bab7f7aa6d8e735d9b343"} Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.619862 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 13:54:03 crc kubenswrapper[4771]: I0123 13:54:03.978914 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.074143 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-swift-storage-0\") pod \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.074272 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-svc\") pod \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.074321 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-config\") pod \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.074481 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv477\" (UniqueName: \"kubernetes.io/projected/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-kube-api-access-sv477\") pod \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.074526 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-sb\") pod \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.074683 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb\") pod \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.085759 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.085890 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.095844 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-kube-api-access-sv477" (OuterVolumeSpecName: "kube-api-access-sv477") pod "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" (UID: "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1"). InnerVolumeSpecName "kube-api-access-sv477". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:04 crc kubenswrapper[4771]: E0123 13:54:04.132773 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb9ca3b_06d1_428d_a140_b946a9ef5931.slice/crio-82fd031f8745cac6311115cf334648b64b78793fb9d604cdd5ab14f8531e5583.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb9ca3b_06d1_428d_a140_b946a9ef5931.slice/crio-conmon-82fd031f8745cac6311115cf334648b64b78793fb9d604cdd5ab14f8531e5583.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.160933 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" (UID: "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.176267 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" (UID: "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.176720 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb\") pod \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\" (UID: \"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1\") " Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.177312 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv477\" (UniqueName: \"kubernetes.io/projected/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-kube-api-access-sv477\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.177340 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:04 crc kubenswrapper[4771]: W0123 13:54:04.177491 4771 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1/volumes/kubernetes.io~configmap/ovsdbserver-nb Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.177511 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" (UID: "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.190911 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" (UID: "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.205057 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-config" (OuterVolumeSpecName: "config") pod "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" (UID: "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.224355 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" (UID: "1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.279469 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.279519 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.279529 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.279539 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.590925 4771 generic.go:334] "Generic (PLEG): container finished" podID="ebb9ca3b-06d1-428d-a140-b946a9ef5931" containerID="82fd031f8745cac6311115cf334648b64b78793fb9d604cdd5ab14f8531e5583" exitCode=0 Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.591017 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zchgr" event={"ID":"ebb9ca3b-06d1-428d-a140-b946a9ef5931","Type":"ContainerDied","Data":"82fd031f8745cac6311115cf334648b64b78793fb9d604cdd5ab14f8531e5583"} Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.596609 4771 generic.go:334] "Generic (PLEG): container finished" podID="3bdfda9a-c75f-412b-81f4-b33bb47d9435" containerID="2aad550a44cab96e05de1c3e5f32f531e3aefb34835ef16ffd27b4740199bef3" exitCode=0 Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.596737 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" event={"ID":"3bdfda9a-c75f-412b-81f4-b33bb47d9435","Type":"ContainerDied","Data":"2aad550a44cab96e05de1c3e5f32f531e3aefb34835ef16ffd27b4740199bef3"} Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.599860 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.600659 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fc64bcc5-zfnkn" event={"ID":"1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1","Type":"ContainerDied","Data":"d7aa5ebdbad8e0ca698395dea62f86618c05bb15828d0ff108b7cdefa86a1dbf"} Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.600729 4771 scope.go:117] "RemoveContainer" containerID="33f34933169f5dffb367a4fee829842938952249f48bab7f7aa6d8e735d9b343" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.644370 4771 scope.go:117] "RemoveContainer" containerID="61e5669029c699331a5f8fd4c9488128ce993e748e2f0d4ca0ea661743f23f2b" Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.683341 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fc64bcc5-zfnkn"] Jan 23 13:54:04 crc kubenswrapper[4771]: I0123 13:54:04.694965 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fc64bcc5-zfnkn"] Jan 23 13:54:05 crc kubenswrapper[4771]: I0123 13:54:05.244893 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" path="/var/lib/kubelet/pods/1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1/volumes" Jan 23 13:54:05 crc kubenswrapper[4771]: I0123 13:54:05.710106 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:05.934057 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:05.934178 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.178373 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.189461 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.352530 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fzvx\" (UniqueName: \"kubernetes.io/projected/ebb9ca3b-06d1-428d-a140-b946a9ef5931-kube-api-access-8fzvx\") pod \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.352599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-config-data\") pod \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.352716 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-combined-ca-bundle\") pod \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.352820 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-scripts\") pod \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.352882 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-combined-ca-bundle\") pod \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.352979 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqh7p\" (UniqueName: \"kubernetes.io/projected/3bdfda9a-c75f-412b-81f4-b33bb47d9435-kube-api-access-tqh7p\") pod \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.353083 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-scripts\") pod \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\" (UID: \"ebb9ca3b-06d1-428d-a140-b946a9ef5931\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.353935 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-config-data\") pod \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\" (UID: \"3bdfda9a-c75f-412b-81f4-b33bb47d9435\") " Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.361216 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-scripts" (OuterVolumeSpecName: "scripts") pod "ebb9ca3b-06d1-428d-a140-b946a9ef5931" (UID: "ebb9ca3b-06d1-428d-a140-b946a9ef5931"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.364579 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb9ca3b-06d1-428d-a140-b946a9ef5931-kube-api-access-8fzvx" (OuterVolumeSpecName: "kube-api-access-8fzvx") pod "ebb9ca3b-06d1-428d-a140-b946a9ef5931" (UID: "ebb9ca3b-06d1-428d-a140-b946a9ef5931"). InnerVolumeSpecName "kube-api-access-8fzvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.379701 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-scripts" (OuterVolumeSpecName: "scripts") pod "3bdfda9a-c75f-412b-81f4-b33bb47d9435" (UID: "3bdfda9a-c75f-412b-81f4-b33bb47d9435"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.379739 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdfda9a-c75f-412b-81f4-b33bb47d9435-kube-api-access-tqh7p" (OuterVolumeSpecName: "kube-api-access-tqh7p") pod "3bdfda9a-c75f-412b-81f4-b33bb47d9435" (UID: "3bdfda9a-c75f-412b-81f4-b33bb47d9435"). InnerVolumeSpecName "kube-api-access-tqh7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.397185 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-config-data" (OuterVolumeSpecName: "config-data") pod "ebb9ca3b-06d1-428d-a140-b946a9ef5931" (UID: "ebb9ca3b-06d1-428d-a140-b946a9ef5931"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.397615 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-config-data" (OuterVolumeSpecName: "config-data") pod "3bdfda9a-c75f-412b-81f4-b33bb47d9435" (UID: "3bdfda9a-c75f-412b-81f4-b33bb47d9435"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.402295 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bdfda9a-c75f-412b-81f4-b33bb47d9435" (UID: "3bdfda9a-c75f-412b-81f4-b33bb47d9435"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.416229 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebb9ca3b-06d1-428d-a140-b946a9ef5931" (UID: "ebb9ca3b-06d1-428d-a140-b946a9ef5931"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458776 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458821 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458836 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fzvx\" (UniqueName: \"kubernetes.io/projected/ebb9ca3b-06d1-428d-a140-b946a9ef5931-kube-api-access-8fzvx\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458851 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458863 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458872 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdfda9a-c75f-412b-81f4-b33bb47d9435-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458883 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9ca3b-06d1-428d-a140-b946a9ef5931-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.458892 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqh7p\" (UniqueName: \"kubernetes.io/projected/3bdfda9a-c75f-412b-81f4-b33bb47d9435-kube-api-access-tqh7p\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.649084 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" event={"ID":"3bdfda9a-c75f-412b-81f4-b33bb47d9435","Type":"ContainerDied","Data":"b1cb769a6e4c22ee9bce4eba2f00b8b793ae250674cf5aa35a5ed6923c95d00f"} Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.649146 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1cb769a6e4c22ee9bce4eba2f00b8b793ae250674cf5aa35a5ed6923c95d00f" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.649265 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lwjw4" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.656924 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zchgr" event={"ID":"ebb9ca3b-06d1-428d-a140-b946a9ef5931","Type":"ContainerDied","Data":"ec572735ade6be7ce526cdcd850f3b00ad35cdb035d6ea1f9ff02d9d090a5553"} Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.656980 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec572735ade6be7ce526cdcd850f3b00ad35cdb035d6ea1f9ff02d9d090a5553" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.657048 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zchgr" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.774764 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 13:54:06 crc kubenswrapper[4771]: E0123 13:54:06.775394 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerName="dnsmasq-dns" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.775435 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerName="dnsmasq-dns" Jan 23 13:54:06 crc kubenswrapper[4771]: E0123 13:54:06.775478 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bdfda9a-c75f-412b-81f4-b33bb47d9435" containerName="nova-cell1-conductor-db-sync" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.775491 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bdfda9a-c75f-412b-81f4-b33bb47d9435" containerName="nova-cell1-conductor-db-sync" Jan 23 13:54:06 crc kubenswrapper[4771]: E0123 13:54:06.775509 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerName="init" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.775519 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerName="init" Jan 23 13:54:06 crc kubenswrapper[4771]: E0123 13:54:06.775546 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9ca3b-06d1-428d-a140-b946a9ef5931" containerName="nova-manage" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.775555 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9ca3b-06d1-428d-a140-b946a9ef5931" containerName="nova-manage" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.775886 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bdfda9a-c75f-412b-81f4-b33bb47d9435" containerName="nova-cell1-conductor-db-sync" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.775930 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c1066c4-8331-4f5e-91e5-0ddc8c40f7f1" containerName="dnsmasq-dns" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.775952 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9ca3b-06d1-428d-a140-b946a9ef5931" containerName="nova-manage" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.776993 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.782573 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.794970 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.869151 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ab6d81-eada-4016-a21b-4319283e7b50-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.869206 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ab6d81-eada-4016-a21b-4319283e7b50-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.869311 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ftcl\" (UniqueName: \"kubernetes.io/projected/77ab6d81-eada-4016-a21b-4319283e7b50-kube-api-access-5ftcl\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.920095 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.920492 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-log" containerID="cri-o://3f682d6e6871ac5735e39302bc8de0f7d8f6278d282874386f6c84485a8418d6" gracePeriod=30 Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.920798 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-api" containerID="cri-o://880a7cf5997d3e42cbd816d1cc72fd6bafe906812575337055044fd73e6bf75e" gracePeriod=30 Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.936099 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.936366 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="93f8c133-f857-4fff-87e1-fd9b79e946eb" containerName="nova-scheduler-scheduler" containerID="cri-o://1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" gracePeriod=30 Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.971219 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ftcl\" (UniqueName: \"kubernetes.io/projected/77ab6d81-eada-4016-a21b-4319283e7b50-kube-api-access-5ftcl\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.971341 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77ab6d81-eada-4016-a21b-4319283e7b50-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.971366 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ab6d81-eada-4016-a21b-4319283e7b50-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.984710 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ab6d81-eada-4016-a21b-4319283e7b50-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.985963 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.986187 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-log" containerID="cri-o://2237ca52d620d6e33bc46863124b3fb6857dbeb6b34e02db0a3906ebf030e0df" gracePeriod=30 Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.986699 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-metadata" containerID="cri-o://9e557997f5a0299251316eefbcc9f43b906bf883f55d7c2d8241531737935b7e" gracePeriod=30 Jan 23 13:54:06 crc kubenswrapper[4771]: I0123 13:54:06.991672 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ab6d81-eada-4016-a21b-4319283e7b50-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.014385 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ftcl\" (UniqueName: \"kubernetes.io/projected/77ab6d81-eada-4016-a21b-4319283e7b50-kube-api-access-5ftcl\") pod \"nova-cell1-conductor-0\" (UID: \"77ab6d81-eada-4016-a21b-4319283e7b50\") " pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.112049 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.674721 4771 generic.go:334] "Generic (PLEG): container finished" podID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerID="3f682d6e6871ac5735e39302bc8de0f7d8f6278d282874386f6c84485a8418d6" exitCode=143 Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.675156 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"977d0eeb-f7ea-44fd-b2b9-5ec27f505119","Type":"ContainerDied","Data":"3f682d6e6871ac5735e39302bc8de0f7d8f6278d282874386f6c84485a8418d6"} Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.680557 4771 generic.go:334] "Generic (PLEG): container finished" podID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerID="9e557997f5a0299251316eefbcc9f43b906bf883f55d7c2d8241531737935b7e" exitCode=0 Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.680601 4771 generic.go:334] "Generic (PLEG): container finished" podID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerID="2237ca52d620d6e33bc46863124b3fb6857dbeb6b34e02db0a3906ebf030e0df" exitCode=143 Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.680628 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97d079d2-a569-4a0b-8842-bb92745c5dcd","Type":"ContainerDied","Data":"9e557997f5a0299251316eefbcc9f43b906bf883f55d7c2d8241531737935b7e"} Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.680661 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97d079d2-a569-4a0b-8842-bb92745c5dcd","Type":"ContainerDied","Data":"2237ca52d620d6e33bc46863124b3fb6857dbeb6b34e02db0a3906ebf030e0df"} Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.890526 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 13:54:07 crc kubenswrapper[4771]: I0123 13:54:07.970471 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:08 crc kubenswrapper[4771]: E0123 13:54:08.095085 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:08 crc kubenswrapper[4771]: E0123 13:54:08.103651 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.103836 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-nova-metadata-tls-certs\") pod \"97d079d2-a569-4a0b-8842-bb92745c5dcd\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.104239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-config-data\") pod \"97d079d2-a569-4a0b-8842-bb92745c5dcd\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.104300 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d079d2-a569-4a0b-8842-bb92745c5dcd-logs\") pod \"97d079d2-a569-4a0b-8842-bb92745c5dcd\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.104331 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-combined-ca-bundle\") pod \"97d079d2-a569-4a0b-8842-bb92745c5dcd\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.104430 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6gfd\" (UniqueName: \"kubernetes.io/projected/97d079d2-a569-4a0b-8842-bb92745c5dcd-kube-api-access-l6gfd\") pod \"97d079d2-a569-4a0b-8842-bb92745c5dcd\" (UID: \"97d079d2-a569-4a0b-8842-bb92745c5dcd\") " Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.104821 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d079d2-a569-4a0b-8842-bb92745c5dcd-logs" (OuterVolumeSpecName: "logs") pod "97d079d2-a569-4a0b-8842-bb92745c5dcd" (UID: "97d079d2-a569-4a0b-8842-bb92745c5dcd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:54:08 crc kubenswrapper[4771]: E0123 13:54:08.105985 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:08 crc kubenswrapper[4771]: E0123 13:54:08.106110 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="93f8c133-f857-4fff-87e1-fd9b79e946eb" containerName="nova-scheduler-scheduler" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.120528 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97d079d2-a569-4a0b-8842-bb92745c5dcd-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.126846 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d079d2-a569-4a0b-8842-bb92745c5dcd-kube-api-access-l6gfd" (OuterVolumeSpecName: "kube-api-access-l6gfd") pod "97d079d2-a569-4a0b-8842-bb92745c5dcd" (UID: "97d079d2-a569-4a0b-8842-bb92745c5dcd"). InnerVolumeSpecName "kube-api-access-l6gfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.183024 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-config-data" (OuterVolumeSpecName: "config-data") pod "97d079d2-a569-4a0b-8842-bb92745c5dcd" (UID: "97d079d2-a569-4a0b-8842-bb92745c5dcd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.209098 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97d079d2-a569-4a0b-8842-bb92745c5dcd" (UID: "97d079d2-a569-4a0b-8842-bb92745c5dcd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.224364 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.224493 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.224552 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6gfd\" (UniqueName: \"kubernetes.io/projected/97d079d2-a569-4a0b-8842-bb92745c5dcd-kube-api-access-l6gfd\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.247186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "97d079d2-a569-4a0b-8842-bb92745c5dcd" (UID: "97d079d2-a569-4a0b-8842-bb92745c5dcd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.326587 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97d079d2-a569-4a0b-8842-bb92745c5dcd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.694145 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97d079d2-a569-4a0b-8842-bb92745c5dcd","Type":"ContainerDied","Data":"ad94760f76990d45242268c768e8e90977ea4472814eac6ffbc0012dbbbc832e"} Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.694218 4771 scope.go:117] "RemoveContainer" containerID="9e557997f5a0299251316eefbcc9f43b906bf883f55d7c2d8241531737935b7e" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.694393 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.699216 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"77ab6d81-eada-4016-a21b-4319283e7b50","Type":"ContainerStarted","Data":"d42ca4217900a84b07b2ea2261b93273664d144a68663b09bdda289fcb8e59ad"} Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.699273 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"77ab6d81-eada-4016-a21b-4319283e7b50","Type":"ContainerStarted","Data":"c99ee33764304b9ec6cf324a5f8be36f1a2b781e194fafbd40b9f008de0b21ad"} Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.699585 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.754861 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.754836047 podStartE2EDuration="2.754836047s" podCreationTimestamp="2026-01-23 13:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:08.74450772 +0000 UTC m=+1289.767045345" watchObservedRunningTime="2026-01-23 13:54:08.754836047 +0000 UTC m=+1289.777373672" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.758910 4771 scope.go:117] "RemoveContainer" containerID="2237ca52d620d6e33bc46863124b3fb6857dbeb6b34e02db0a3906ebf030e0df" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.795475 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.819608 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.839510 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:08 crc kubenswrapper[4771]: E0123 13:54:08.839979 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-metadata" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.839995 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-metadata" Jan 23 13:54:08 crc kubenswrapper[4771]: E0123 13:54:08.844516 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-log" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.844546 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-log" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.844920 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-log" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.844938 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" containerName="nova-metadata-metadata" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.846174 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.850200 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.854260 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.854480 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.940550 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqm6\" (UniqueName: \"kubernetes.io/projected/b4c92067-5c4d-4c0a-a273-e6c274bf1660-kube-api-access-2rqm6\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.940743 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-config-data\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.940887 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c92067-5c4d-4c0a-a273-e6c274bf1660-logs\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.940961 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:08 crc kubenswrapper[4771]: I0123 13:54:08.941131 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.043440 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqm6\" (UniqueName: \"kubernetes.io/projected/b4c92067-5c4d-4c0a-a273-e6c274bf1660-kube-api-access-2rqm6\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.043548 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-config-data\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.043610 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c92067-5c4d-4c0a-a273-e6c274bf1660-logs\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 
13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.043644 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.043724 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.044247 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c92067-5c4d-4c0a-a273-e6c274bf1660-logs\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.049392 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-config-data\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.049639 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.064752 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.067635 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqm6\" (UniqueName: \"kubernetes.io/projected/b4c92067-5c4d-4c0a-a273-e6c274bf1660-kube-api-access-2rqm6\") pod \"nova-metadata-0\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.185191 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.246257 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d079d2-a569-4a0b-8842-bb92745c5dcd" path="/var/lib/kubelet/pods/97d079d2-a569-4a0b-8842-bb92745c5dcd/volumes" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.724545 4771 generic.go:334] "Generic (PLEG): container finished" podID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerID="880a7cf5997d3e42cbd816d1cc72fd6bafe906812575337055044fd73e6bf75e" exitCode=0 Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.724838 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"977d0eeb-f7ea-44fd-b2b9-5ec27f505119","Type":"ContainerDied","Data":"880a7cf5997d3e42cbd816d1cc72fd6bafe906812575337055044fd73e6bf75e"} Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.776898 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.854157 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.867527 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ksc\" (UniqueName: \"kubernetes.io/projected/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-kube-api-access-r2ksc\") pod \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.867708 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-combined-ca-bundle\") pod \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.867745 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-config-data\") pod \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.867866 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-logs\") pod \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\" (UID: \"977d0eeb-f7ea-44fd-b2b9-5ec27f505119\") " Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.871538 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-logs" (OuterVolumeSpecName: "logs") pod "977d0eeb-f7ea-44fd-b2b9-5ec27f505119" (UID: "977d0eeb-f7ea-44fd-b2b9-5ec27f505119"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.875963 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-kube-api-access-r2ksc" (OuterVolumeSpecName: "kube-api-access-r2ksc") pod "977d0eeb-f7ea-44fd-b2b9-5ec27f505119" (UID: "977d0eeb-f7ea-44fd-b2b9-5ec27f505119"). InnerVolumeSpecName "kube-api-access-r2ksc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.907581 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "977d0eeb-f7ea-44fd-b2b9-5ec27f505119" (UID: "977d0eeb-f7ea-44fd-b2b9-5ec27f505119"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.949628 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-config-data" (OuterVolumeSpecName: "config-data") pod "977d0eeb-f7ea-44fd-b2b9-5ec27f505119" (UID: "977d0eeb-f7ea-44fd-b2b9-5ec27f505119"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.971892 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.971925 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ksc\" (UniqueName: \"kubernetes.io/projected/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-kube-api-access-r2ksc\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.971944 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:09 crc kubenswrapper[4771]: I0123 13:54:09.971960 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977d0eeb-f7ea-44fd-b2b9-5ec27f505119-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.742615 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"977d0eeb-f7ea-44fd-b2b9-5ec27f505119","Type":"ContainerDied","Data":"2d71f5c19d8e28365dc4abb621829fee64561c976b0d326f17c8655aa78e2807"} Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.743131 4771 scope.go:117] "RemoveContainer" containerID="880a7cf5997d3e42cbd816d1cc72fd6bafe906812575337055044fd73e6bf75e" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.743252 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.765636 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c92067-5c4d-4c0a-a273-e6c274bf1660","Type":"ContainerStarted","Data":"48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826"} Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.765690 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c92067-5c4d-4c0a-a273-e6c274bf1660","Type":"ContainerStarted","Data":"7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2"} Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.765701 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c92067-5c4d-4c0a-a273-e6c274bf1660","Type":"ContainerStarted","Data":"f0cd5427e054d7d4d7ba224079a63521f8569a359ea9d86e0a0038c8adca2688"} Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.804481 4771 scope.go:117] "RemoveContainer" containerID="3f682d6e6871ac5735e39302bc8de0f7d8f6278d282874386f6c84485a8418d6" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.810318 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.828601 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.923321 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:10 crc kubenswrapper[4771]: E0123 13:54:10.925771 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-log" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.925830 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-log" Jan 23 13:54:10 crc kubenswrapper[4771]: E0123 13:54:10.925843 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-api" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.925850 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-api" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.926243 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-api" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.926258 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" containerName="nova-api-log" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.937659 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.942032 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.960481 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:10 crc kubenswrapper[4771]: I0123 13:54:10.970836 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.970813051 podStartE2EDuration="2.970813051s" podCreationTimestamp="2026-01-23 13:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:10.842024422 +0000 UTC m=+1291.864562047" watchObservedRunningTime="2026-01-23 13:54:10.970813051 +0000 UTC m=+1291.993350676" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.004934 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8904af9e-4353-4861-8106-2ae4075fafeb-logs\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.005180 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-config-data\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.005300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.005378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrbs\" (UniqueName: \"kubernetes.io/projected/8904af9e-4353-4861-8106-2ae4075fafeb-kube-api-access-xhrbs\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.107856 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.108207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhrbs\" (UniqueName: \"kubernetes.io/projected/8904af9e-4353-4861-8106-2ae4075fafeb-kube-api-access-xhrbs\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.108269 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8904af9e-4353-4861-8106-2ae4075fafeb-logs\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.108343 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-config-data\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.109125 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8904af9e-4353-4861-8106-2ae4075fafeb-logs\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.116898 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-config-data\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.117116 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.128045 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhrbs\" (UniqueName: \"kubernetes.io/projected/8904af9e-4353-4861-8106-2ae4075fafeb-kube-api-access-xhrbs\") pod \"nova-api-0\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") " pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.257394 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977d0eeb-f7ea-44fd-b2b9-5ec27f505119" path="/var/lib/kubelet/pods/977d0eeb-f7ea-44fd-b2b9-5ec27f505119/volumes" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.261462 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.465858 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.527670 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7bqf\" (UniqueName: \"kubernetes.io/projected/93f8c133-f857-4fff-87e1-fd9b79e946eb-kube-api-access-z7bqf\") pod \"93f8c133-f857-4fff-87e1-fd9b79e946eb\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.527828 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-config-data\") pod \"93f8c133-f857-4fff-87e1-fd9b79e946eb\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.527950 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-combined-ca-bundle\") pod \"93f8c133-f857-4fff-87e1-fd9b79e946eb\" (UID: \"93f8c133-f857-4fff-87e1-fd9b79e946eb\") " Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.548166 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f8c133-f857-4fff-87e1-fd9b79e946eb-kube-api-access-z7bqf" (OuterVolumeSpecName: "kube-api-access-z7bqf") pod "93f8c133-f857-4fff-87e1-fd9b79e946eb" (UID: "93f8c133-f857-4fff-87e1-fd9b79e946eb"). InnerVolumeSpecName "kube-api-access-z7bqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.596276 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93f8c133-f857-4fff-87e1-fd9b79e946eb" (UID: "93f8c133-f857-4fff-87e1-fd9b79e946eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.633454 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-config-data" (OuterVolumeSpecName: "config-data") pod "93f8c133-f857-4fff-87e1-fd9b79e946eb" (UID: "93f8c133-f857-4fff-87e1-fd9b79e946eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.635480 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7bqf\" (UniqueName: \"kubernetes.io/projected/93f8c133-f857-4fff-87e1-fd9b79e946eb-kube-api-access-z7bqf\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.635513 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.635523 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f8c133-f857-4fff-87e1-fd9b79e946eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.780730 4771 generic.go:334] "Generic (PLEG): container finished" podID="93f8c133-f857-4fff-87e1-fd9b79e946eb" containerID="1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" exitCode=0 Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.780846 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93f8c133-f857-4fff-87e1-fd9b79e946eb","Type":"ContainerDied","Data":"1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9"} Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.780858 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.780900 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"93f8c133-f857-4fff-87e1-fd9b79e946eb","Type":"ContainerDied","Data":"3272db8b9b1b47c4ad8733d366ce509e4701c174530584aedf9fb68de8a22095"} Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.780932 4771 scope.go:117] "RemoveContainer" containerID="1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.826007 4771 scope.go:117] "RemoveContainer" containerID="1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" Jan 23 13:54:11 crc kubenswrapper[4771]: E0123 13:54:11.828735 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9\": container with ID starting with 1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9 not found: ID does not exist" containerID="1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.828778 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9"} err="failed to get container status \"1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9\": rpc error: code = NotFound desc = could not find container \"1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9\": container with ID starting with 1ff63584c10ab6adf2912b6f5b7a578dbf5e13a0558d61272444b5ddb8ac9eb9 not found: ID does not exist" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.839040 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.873772 4771 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.920497 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:11 crc kubenswrapper[4771]: E0123 13:54:11.921278 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f8c133-f857-4fff-87e1-fd9b79e946eb" containerName="nova-scheduler-scheduler" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.921297 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f8c133-f857-4fff-87e1-fd9b79e946eb" containerName="nova-scheduler-scheduler" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.921653 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f8c133-f857-4fff-87e1-fd9b79e946eb" containerName="nova-scheduler-scheduler" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.922768 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.927321 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.928870 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:11 crc kubenswrapper[4771]: I0123 13:54:11.956513 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.045896 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-config-data\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.046461 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.046509 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7h5f\" (UniqueName: \"kubernetes.io/projected/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-kube-api-access-h7h5f\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.148737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-config-data\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.149179 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.149287 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7h5f\" (UniqueName: 
\"kubernetes.io/projected/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-kube-api-access-h7h5f\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.149626 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.149930 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b4fa8367-bad7-4681-93a1-835923d93421" containerName="kube-state-metrics" containerID="cri-o://8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe" gracePeriod=30 Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.153287 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.154734 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-config-data\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.171007 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7h5f\" (UniqueName: \"kubernetes.io/projected/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-kube-api-access-h7h5f\") pod \"nova-scheduler-0\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.254854 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.728778 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.813956 4771 generic.go:334] "Generic (PLEG): container finished" podID="b4fa8367-bad7-4681-93a1-835923d93421" containerID="8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe" exitCode=2 Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.814056 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4fa8367-bad7-4681-93a1-835923d93421","Type":"ContainerDied","Data":"8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe"} Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.814103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4fa8367-bad7-4681-93a1-835923d93421","Type":"ContainerDied","Data":"27e1804d80ac3542c96eef4969e1b8e395197254ec54c931eae86939ac153833"} Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.814130 4771 scope.go:117] "RemoveContainer" containerID="8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.814530 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.852501 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8904af9e-4353-4861-8106-2ae4075fafeb","Type":"ContainerStarted","Data":"9df6f80df8a28c5cb924f7024f6ec4d258b2d83a73b323c8ecaeecf9426f0cea"} Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.852557 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8904af9e-4353-4861-8106-2ae4075fafeb","Type":"ContainerStarted","Data":"16ed9393b594bcba9995cbddd987d4af316280f93afecf4901d8a1ac194fb57f"} Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.852570 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8904af9e-4353-4861-8106-2ae4075fafeb","Type":"ContainerStarted","Data":"5f40805d6f417a039a318bb56f7fa61ed404267f85f8800ebcbb9dda1f9aa34b"} Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.866496 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb2ww\" (UniqueName: \"kubernetes.io/projected/b4fa8367-bad7-4681-93a1-835923d93421-kube-api-access-jb2ww\") pod \"b4fa8367-bad7-4681-93a1-835923d93421\" (UID: \"b4fa8367-bad7-4681-93a1-835923d93421\") " Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.884383 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4fa8367-bad7-4681-93a1-835923d93421-kube-api-access-jb2ww" (OuterVolumeSpecName: "kube-api-access-jb2ww") pod "b4fa8367-bad7-4681-93a1-835923d93421" (UID: "b4fa8367-bad7-4681-93a1-835923d93421"). InnerVolumeSpecName "kube-api-access-jb2ww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.884518 4771 scope.go:117] "RemoveContainer" containerID="8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe" Jan 23 13:54:12 crc kubenswrapper[4771]: E0123 13:54:12.888162 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe\": container with ID starting with 8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe not found: ID does not exist" containerID="8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.888215 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe"} err="failed to get container status \"8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe\": rpc error: code = NotFound desc = could not find container \"8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe\": container with ID starting with 8ef94d4addde4401b311aa594a7f75e8347b6eb4f206dabcf6edd9609265e8fe not found: ID does not exist" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.898072 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.900818 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.900796418 podStartE2EDuration="2.900796418s" podCreationTimestamp="2026-01-23 13:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-23 13:54:12.877149049 +0000 UTC m=+1293.899686694" watchObservedRunningTime="2026-01-23 13:54:12.900796418 +0000 UTC m=+1293.923334043" Jan 23 13:54:12 crc kubenswrapper[4771]: I0123 13:54:12.975180 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb2ww\" (UniqueName: \"kubernetes.io/projected/b4fa8367-bad7-4681-93a1-835923d93421-kube-api-access-jb2ww\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.201594 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.218736 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.275737 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f8c133-f857-4fff-87e1-fd9b79e946eb" path="/var/lib/kubelet/pods/93f8c133-f857-4fff-87e1-fd9b79e946eb/volumes" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.276774 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4fa8367-bad7-4681-93a1-835923d93421" path="/var/lib/kubelet/pods/b4fa8367-bad7-4681-93a1-835923d93421/volumes" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.277560 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:54:13 crc kubenswrapper[4771]: E0123 13:54:13.301190 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4fa8367-bad7-4681-93a1-835923d93421" containerName="kube-state-metrics" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.301341 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4fa8367-bad7-4681-93a1-835923d93421" containerName="kube-state-metrics" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.309012 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4fa8367-bad7-4681-93a1-835923d93421" containerName="kube-state-metrics" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.312422 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.322096 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.322516 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.401661 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6gp\" (UniqueName: \"kubernetes.io/projected/0b974486-d677-48cc-acaf-785af0b5555a-kube-api-access-mv6gp\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.401772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.401969 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.402029 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.409139 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.504384 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.504469 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.504624 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6gp\" (UniqueName: \"kubernetes.io/projected/0b974486-d677-48cc-acaf-785af0b5555a-kube-api-access-mv6gp\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.504665 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.522724 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.522931 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.525846 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6gp\" (UniqueName: \"kubernetes.io/projected/0b974486-d677-48cc-acaf-785af0b5555a-kube-api-access-mv6gp\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.525870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b974486-d677-48cc-acaf-785af0b5555a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0b974486-d677-48cc-acaf-785af0b5555a\") " pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.719430 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.867438 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d99fb4dc-8562-41cc-a3a4-a4a00538ad51","Type":"ContainerStarted","Data":"325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499"} Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.867970 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d99fb4dc-8562-41cc-a3a4-a4a00538ad51","Type":"ContainerStarted","Data":"afff33868df1c1b082ebde144da22414e36fd2d46c20fa072b46e5d6818c687c"} Jan 23 13:54:13 crc kubenswrapper[4771]: I0123 13:54:13.905780 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.905759117 podStartE2EDuration="2.905759117s" podCreationTimestamp="2026-01-23 13:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:13.900065737 +0000 UTC m=+1294.922603372" watchObservedRunningTime="2026-01-23 13:54:13.905759117 +0000 UTC m=+1294.928296742" Jan 23 13:54:14 crc kubenswrapper[4771]: I0123 13:54:14.186333 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 13:54:14 crc kubenswrapper[4771]: I0123 13:54:14.186393 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 13:54:14 crc kubenswrapper[4771]: I0123 13:54:14.288803 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 13:54:14 crc kubenswrapper[4771]: I0123 13:54:14.901369 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b974486-d677-48cc-acaf-785af0b5555a","Type":"ContainerStarted","Data":"679afbe935e5c5bd996e5c05802877735e1d3ebf6a91d1e9e59cbbbca8f0dbcb"} Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.004535 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.004934 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-central-agent" containerID="cri-o://29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7" gracePeriod=30 Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.005643 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="proxy-httpd" containerID="cri-o://87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13" gracePeriod=30 Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.005650 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-notification-agent" containerID="cri-o://eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72" gracePeriod=30 Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.005686 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="sg-core" containerID="cri-o://02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086" 
gracePeriod=30 Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.912216 4771 generic.go:334] "Generic (PLEG): container finished" podID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerID="87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13" exitCode=0 Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.912682 4771 generic.go:334] "Generic (PLEG): container finished" podID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerID="02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086" exitCode=2 Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.912693 4771 generic.go:334] "Generic (PLEG): container finished" podID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerID="29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7" exitCode=0 Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.912393 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerDied","Data":"87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13"} Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.912758 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerDied","Data":"02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086"} Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.912775 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerDied","Data":"29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7"} Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.914071 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b974486-d677-48cc-acaf-785af0b5555a","Type":"ContainerStarted","Data":"c1a4e6c8d5cd3cc5b81725d76a42fbbd6f93a1a07a0ea6df5c7007cb80d0a823"} Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.914208 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 13:54:15 crc kubenswrapper[4771]: I0123 13:54:15.938834 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.524781604 podStartE2EDuration="2.938809857s" podCreationTimestamp="2026-01-23 13:54:13 +0000 UTC" firstStartedPulling="2026-01-23 13:54:14.296217574 +0000 UTC m=+1295.318755199" lastFinishedPulling="2026-01-23 13:54:14.710245827 +0000 UTC m=+1295.732783452" observedRunningTime="2026-01-23 13:54:15.933929223 +0000 UTC m=+1296.956466838" watchObservedRunningTime="2026-01-23 13:54:15.938809857 +0000 UTC m=+1296.961347482" Jan 23 13:54:17 crc kubenswrapper[4771]: I0123 13:54:17.150836 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 13:54:17 crc kubenswrapper[4771]: I0123 13:54:17.255536 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 13:54:19 crc kubenswrapper[4771]: I0123 13:54:19.185499 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 13:54:19 crc kubenswrapper[4771]: I0123 13:54:19.185925 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.199709 4771 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.199772 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.871717 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.902397 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-combined-ca-bundle\") pod \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.902512 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7n9k\" (UniqueName: \"kubernetes.io/projected/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-kube-api-access-s7n9k\") pod \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.902574 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-sg-core-conf-yaml\") pod \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.902672 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-scripts\") pod \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.902817 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-log-httpd\") pod \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.902889 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-config-data\") pod \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.902943 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-run-httpd\") pod \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\" (UID: \"837f7d4b-577b-4f75-b0ce-361aa9c6e82a\") " Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.903940 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"837f7d4b-577b-4f75-b0ce-361aa9c6e82a" (UID: "837f7d4b-577b-4f75-b0ce-361aa9c6e82a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.904348 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "837f7d4b-577b-4f75-b0ce-361aa9c6e82a" (UID: "837f7d4b-577b-4f75-b0ce-361aa9c6e82a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.911738 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-scripts" (OuterVolumeSpecName: "scripts") pod "837f7d4b-577b-4f75-b0ce-361aa9c6e82a" (UID: "837f7d4b-577b-4f75-b0ce-361aa9c6e82a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.913248 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-kube-api-access-s7n9k" (OuterVolumeSpecName: "kube-api-access-s7n9k") pod "837f7d4b-577b-4f75-b0ce-361aa9c6e82a" (UID: "837f7d4b-577b-4f75-b0ce-361aa9c6e82a"). InnerVolumeSpecName "kube-api-access-s7n9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.943969 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "837f7d4b-577b-4f75-b0ce-361aa9c6e82a" (UID: "837f7d4b-577b-4f75-b0ce-361aa9c6e82a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.988339 4771 generic.go:334] "Generic (PLEG): container finished" podID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerID="eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72" exitCode=0 Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.988388 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerDied","Data":"eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72"} Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.988552 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"837f7d4b-577b-4f75-b0ce-361aa9c6e82a","Type":"ContainerDied","Data":"38c23fa1ad827fd03ffe043ae62d13fc5dc04fffa59ce02da29cdfa6cf96ff1e"} Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.988576 4771 scope.go:117] "RemoveContainer" containerID="87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13" Jan 23 13:54:20 crc kubenswrapper[4771]: I0123 13:54:20.988695 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.006865 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.006905 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.006915 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7n9k\" (UniqueName: \"kubernetes.io/projected/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-kube-api-access-s7n9k\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.006927 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.006935 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.036325 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "837f7d4b-577b-4f75-b0ce-361aa9c6e82a" (UID: "837f7d4b-577b-4f75-b0ce-361aa9c6e82a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.050063 4771 scope.go:117] "RemoveContainer" containerID="02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.078756 4771 scope.go:117] "RemoveContainer" containerID="eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.089766 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-config-data" (OuterVolumeSpecName: "config-data") pod "837f7d4b-577b-4f75-b0ce-361aa9c6e82a" (UID: "837f7d4b-577b-4f75-b0ce-361aa9c6e82a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.102177 4771 scope.go:117] "RemoveContainer" containerID="29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.109351 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.109391 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837f7d4b-577b-4f75-b0ce-361aa9c6e82a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.133889 4771 scope.go:117] "RemoveContainer" containerID="87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13" Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.134932 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13\": container with ID starting with 87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13 not found: ID does not exist" containerID="87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.134989 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13"} err="failed to get container status \"87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13\": rpc error: code = NotFound desc = could not find container \"87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13\": container with ID starting with 87a57fbdae2c778e27c0264088951d77c6e37cbacb513dc567a75634f8912e13 not found: ID does not exist" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.135023 4771 scope.go:117] "RemoveContainer" containerID="02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086" Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.135925 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086\": container with ID starting with 02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086 not found: ID does not exist" containerID="02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.136001 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086"} err="failed to get container status \"02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086\": rpc error: code = NotFound desc = could not find container \"02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086\": container with ID starting with 02a17894d7a1f775fd2c79ef3735e62ca2f2d7153c10ddbac1f2ad42d5d5e086 not found: ID does not exist" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.136036 4771 scope.go:117] "RemoveContainer" containerID="eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72" Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.136652 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72\": container with ID starting with eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72 not found: ID does not exist" containerID="eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.136690 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72"} err="failed to get container status \"eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72\": rpc error: code = NotFound desc = could not find container \"eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72\": container with ID starting with eb7977210dcf52cbaf2dd702be2855fa8a12fcb13fcba2cc53972b00a894df72 not found: ID does not exist" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.136721 4771 scope.go:117] "RemoveContainer" containerID="29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7" Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.137025 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7\": container with ID starting with 29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7 not found: ID does not exist" containerID="29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.137044 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7"} err="failed to get container status \"29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7\": rpc error: code = NotFound desc = could not find container \"29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7\": container with ID starting with 29bcf1624060b24fe15b5190553f55318d9647d0a29ff024e4ba74f6efac9ac7 not found: ID does not exist" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.264645 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.264704 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.357579 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.381049 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.403057 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.405106 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="proxy-httpd" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.411449 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="proxy-httpd" Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.411718 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-notification-agent" Jan 23 
13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.411830 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-notification-agent" Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.411918 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="sg-core" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.411987 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="sg-core" Jan 23 13:54:21 crc kubenswrapper[4771]: E0123 13:54:21.412072 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-central-agent" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.412135 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-central-agent" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.412685 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-notification-agent" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.412833 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="proxy-httpd" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.412923 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="sg-core" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.412999 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" containerName="ceilometer-central-agent" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.417361 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.422590 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.422972 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.423262 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.424828 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.518941 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-scripts\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.519309 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84xj4\" (UniqueName: \"kubernetes.io/projected/5b82830b-998a-4c09-81fe-d34ae7b13f36-kube-api-access-84xj4\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.519466 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-log-httpd\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.519564 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-config-data\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.519655 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.519791 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.519894 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.519974 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-run-httpd\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.622149 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.622707 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-run-httpd\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.622764 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-scripts\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.622792 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84xj4\" (UniqueName: \"kubernetes.io/projected/5b82830b-998a-4c09-81fe-d34ae7b13f36-kube-api-access-84xj4\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.622838 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-log-httpd\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.622873 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-config-data\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.622896 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.623097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-run-httpd\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.623435 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-log-httpd\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.623566 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.628259 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-config-data\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.642765 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.644464 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.644559 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.645154 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-scripts\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.650072 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84xj4\" (UniqueName: \"kubernetes.io/projected/5b82830b-998a-4c09-81fe-d34ae7b13f36-kube-api-access-84xj4\") pod \"ceilometer-0\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") " pod="openstack/ceilometer-0" Jan 23 13:54:21 crc kubenswrapper[4771]: I0123 13:54:21.736591 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:54:22 crc kubenswrapper[4771]: I0123 13:54:22.255871 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 13:54:22 crc kubenswrapper[4771]: I0123 13:54:22.295244 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 13:54:22 crc kubenswrapper[4771]: I0123 13:54:22.317298 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:54:22 crc kubenswrapper[4771]: W0123 13:54:22.326240 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b82830b_998a_4c09_81fe_d34ae7b13f36.slice/crio-5320f9c637fc70e89ee2688334ad664a16d8061f8bdc344ccb8fdb5ceaa289ff WatchSource:0}: Error finding container 5320f9c637fc70e89ee2688334ad664a16d8061f8bdc344ccb8fdb5ceaa289ff: Status 404 returned error can't find the container with id 5320f9c637fc70e89ee2688334ad664a16d8061f8bdc344ccb8fdb5ceaa289ff Jan 23 13:54:22 crc kubenswrapper[4771]: I0123 13:54:22.346648 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:22 crc kubenswrapper[4771]: I0123 13:54:22.347030 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:23 crc kubenswrapper[4771]: I0123 13:54:23.018444 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerStarted","Data":"94d0bb47ad9e6559e7f42169442801d1066ac95724b506586577a8f023776155"} Jan 23 13:54:23 crc kubenswrapper[4771]: I0123 13:54:23.018991 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerStarted","Data":"859d79b828dbf7e9daaa88ef38ed160053969a50e469c2299c23907fba71922d"} Jan 23 13:54:23 crc kubenswrapper[4771]: I0123 13:54:23.019009 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerStarted","Data":"5320f9c637fc70e89ee2688334ad664a16d8061f8bdc344ccb8fdb5ceaa289ff"} Jan 23 13:54:23 crc kubenswrapper[4771]: I0123 13:54:23.059112 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 13:54:23 crc kubenswrapper[4771]: I0123 13:54:23.247539 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837f7d4b-577b-4f75-b0ce-361aa9c6e82a" path="/var/lib/kubelet/pods/837f7d4b-577b-4f75-b0ce-361aa9c6e82a/volumes" Jan 23 13:54:24 crc kubenswrapper[4771]: I0123 13:54:23.966029 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 13:54:24 crc kubenswrapper[4771]: I0123 13:54:24.080756 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerStarted","Data":"656cd170138c0c79d8cd8bfb79f693a7a05eef253e7d7245c473cfdb0341bccb"} Jan 23 13:54:26 crc kubenswrapper[4771]: I0123 13:54:26.106003 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerStarted","Data":"87d226406f3060a5893186f01c0a01237893b6d71a1105bff8164c0c403aa820"} Jan 23 13:54:26 crc kubenswrapper[4771]: I0123 13:54:26.108611 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 13:54:26 crc kubenswrapper[4771]: I0123 13:54:26.154842 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.947912412 podStartE2EDuration="5.154817861s" podCreationTimestamp="2026-01-23 13:54:21 +0000 UTC" firstStartedPulling="2026-01-23 13:54:22.329787814 +0000 UTC m=+1303.352325439" lastFinishedPulling="2026-01-23 13:54:25.536693223 +0000 UTC m=+1306.559230888" observedRunningTime="2026-01-23 13:54:26.134461176 +0000 UTC m=+1307.156998831" watchObservedRunningTime="2026-01-23 13:54:26.154817861 +0000 UTC m=+1307.177355486" Jan 23 13:54:28 crc kubenswrapper[4771]: I0123 13:54:28.992719 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.037612 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j72h\" (UniqueName: \"kubernetes.io/projected/d33af987-32a1-48ce-878c-534c0d3801aa-kube-api-access-8j72h\") pod \"d33af987-32a1-48ce-878c-534c0d3801aa\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.037699 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-config-data\") pod \"d33af987-32a1-48ce-878c-534c0d3801aa\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.037726 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-combined-ca-bundle\") pod \"d33af987-32a1-48ce-878c-534c0d3801aa\" (UID: \"d33af987-32a1-48ce-878c-534c0d3801aa\") " Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.053795 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33af987-32a1-48ce-878c-534c0d3801aa-kube-api-access-8j72h" (OuterVolumeSpecName: "kube-api-access-8j72h") pod "d33af987-32a1-48ce-878c-534c0d3801aa" (UID: "d33af987-32a1-48ce-878c-534c0d3801aa"). InnerVolumeSpecName "kube-api-access-8j72h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.086043 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d33af987-32a1-48ce-878c-534c0d3801aa" (UID: "d33af987-32a1-48ce-878c-534c0d3801aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.100757 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-config-data" (OuterVolumeSpecName: "config-data") pod "d33af987-32a1-48ce-878c-534c0d3801aa" (UID: "d33af987-32a1-48ce-878c-534c0d3801aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.139847 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j72h\" (UniqueName: \"kubernetes.io/projected/d33af987-32a1-48ce-878c-534c0d3801aa-kube-api-access-8j72h\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.139887 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.139899 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33af987-32a1-48ce-878c-534c0d3801aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.146104 4771 generic.go:334] "Generic (PLEG): container finished" podID="d33af987-32a1-48ce-878c-534c0d3801aa" containerID="9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480" exitCode=137 Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.146174 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.146174 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d33af987-32a1-48ce-878c-534c0d3801aa","Type":"ContainerDied","Data":"9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480"} Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.146318 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d33af987-32a1-48ce-878c-534c0d3801aa","Type":"ContainerDied","Data":"1079360b25070395ce302ba7303beada174c4303f8223e9e7db61e88b58ef87c"} Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.146353 4771 scope.go:117] "RemoveContainer" containerID="9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.192693 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.197049 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.197604 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.223280 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.223836 4771 scope.go:117] "RemoveContainer" containerID="9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480" Jan 23 13:54:29 crc kubenswrapper[4771]: E0123 13:54:29.226951 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480\": container with ID starting with 9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480 not found: ID does not exist" containerID="9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.227022 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480"} err="failed to get container status \"9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480\": rpc error: code = NotFound desc = could not find container \"9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480\": container with ID starting with 9ff00b6d00d376c851a813768f779c362c0e95d6aaa0e64432ee6bc07d5f1480 not found: ID does not exist" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.262480 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.283839 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 13:54:29 crc kubenswrapper[4771]: E0123 13:54:29.284554 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33af987-32a1-48ce-878c-534c0d3801aa" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.284578 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33af987-32a1-48ce-878c-534c0d3801aa" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.284894 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33af987-32a1-48ce-878c-534c0d3801aa" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.286065 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.290910 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.291244 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.291705 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.291947 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.353154 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.354655 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.355395 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.355650 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xhb\" (UniqueName: \"kubernetes.io/projected/33537960-af54-4801-9017-f01b27b5e8e0-kube-api-access-69xhb\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.355762 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.458167 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xhb\" (UniqueName: \"kubernetes.io/projected/33537960-af54-4801-9017-f01b27b5e8e0-kube-api-access-69xhb\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.458232 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.458293 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.458359 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.458377 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.463704 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.465495 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.466177 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.474645 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33537960-af54-4801-9017-f01b27b5e8e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.479045 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xhb\" (UniqueName: \"kubernetes.io/projected/33537960-af54-4801-9017-f01b27b5e8e0-kube-api-access-69xhb\") pod \"nova-cell1-novncproxy-0\" (UID: \"33537960-af54-4801-9017-f01b27b5e8e0\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:29 crc kubenswrapper[4771]: I0123 13:54:29.606774 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:30 crc kubenswrapper[4771]: I0123 13:54:30.122137 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 13:54:30 crc kubenswrapper[4771]: I0123 13:54:30.160112 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33537960-af54-4801-9017-f01b27b5e8e0","Type":"ContainerStarted","Data":"c96ebd7b0cb92be421dc0461b162bbb18de85d9c86a889384b52c4f1f3dec16a"}
Jan 23 13:54:30 crc kubenswrapper[4771]: I0123 13:54:30.176711 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 23 13:54:30 crc kubenswrapper[4771]: I0123 13:54:30.317012 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 13:54:30 crc kubenswrapper[4771]: I0123 13:54:30.317111 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:54:31 crc kubenswrapper[4771]: I0123 13:54:31.178845 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"33537960-af54-4801-9017-f01b27b5e8e0","Type":"ContainerStarted","Data":"1adf7c0c687040d8ecd83b7fb2fa4cd72e0d9fa6f6be449e01d794a9adadbfa8"}
Jan 23 13:54:31 crc kubenswrapper[4771]: I0123 13:54:31.249834 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33af987-32a1-48ce-878c-534c0d3801aa" path="/var/lib/kubelet/pods/d33af987-32a1-48ce-878c-534c0d3801aa/volumes"
Jan 23 13:54:31 crc kubenswrapper[4771]: I0123 13:54:31.536028 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 13:54:31 crc kubenswrapper[4771]: I0123 13:54:31.536719 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 13:54:31 crc kubenswrapper[4771]: I0123 13:54:31.538794 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 13:54:31 crc kubenswrapper[4771]: I0123 13:54:31.566129 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.566102238 podStartE2EDuration="2.566102238s" podCreationTimestamp="2026-01-23 13:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:31.197951047 +0000 UTC m=+1312.220488672" watchObservedRunningTime="2026-01-23 13:54:31.566102238 +0000 UTC m=+1312.588639863"
Jan 23 13:54:31 crc kubenswrapper[4771]: I0123 13:54:31.570164 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.190638 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.198889 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.415784 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59687d4f97-zfhbp"]
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.419480 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.441090 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59687d4f97-zfhbp"]
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.459347 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-swift-storage-0\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.459424 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-sb\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.459496 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-nb\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.459540 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-svc\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.459566 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-config\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.459602 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89xw\" (UniqueName: \"kubernetes.io/projected/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-kube-api-access-s89xw\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.561596 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-nb\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.562253 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-svc\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.562295 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-config\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.562333 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89xw\" (UniqueName: \"kubernetes.io/projected/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-kube-api-access-s89xw\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.562509 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-swift-storage-0\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.562558 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-sb\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.562742 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-nb\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.563454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-svc\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.563543 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-sb\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.564076 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-swift-storage-0\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.565548 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-config\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.585778 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89xw\" (UniqueName: \"kubernetes.io/projected/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-kube-api-access-s89xw\") pod \"dnsmasq-dns-59687d4f97-zfhbp\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:32 crc kubenswrapper[4771]: I0123 13:54:32.769560 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:33 crc kubenswrapper[4771]: I0123 13:54:33.323123 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59687d4f97-zfhbp"]
Jan 23 13:54:34 crc kubenswrapper[4771]: I0123 13:54:34.213742 4771 generic.go:334] "Generic (PLEG): container finished" podID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerID="38398ea628df335bf2c31be9f1ba9dc7f49d469d165eed46a46f2a31e451592f" exitCode=0
Jan 23 13:54:34 crc kubenswrapper[4771]: I0123 13:54:34.213860 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" event={"ID":"a3bf5d7b-8d58-47f7-a92b-54ca738d3032","Type":"ContainerDied","Data":"38398ea628df335bf2c31be9f1ba9dc7f49d469d165eed46a46f2a31e451592f"}
Jan 23 13:54:34 crc kubenswrapper[4771]: I0123 13:54:34.214540 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" event={"ID":"a3bf5d7b-8d58-47f7-a92b-54ca738d3032","Type":"ContainerStarted","Data":"9296aac67a487254c35473d209f7f53b087983d131f6295adb5f6d0b2a4a4515"}
Jan 23 13:54:34 crc kubenswrapper[4771]: I0123 13:54:34.607140 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.189617 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.190218 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-central-agent" containerID="cri-o://859d79b828dbf7e9daaa88ef38ed160053969a50e469c2299c23907fba71922d" gracePeriod=30
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.190343 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-notification-agent" containerID="cri-o://94d0bb47ad9e6559e7f42169442801d1066ac95724b506586577a8f023776155" gracePeriod=30
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.190316 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="sg-core" containerID="cri-o://656cd170138c0c79d8cd8bfb79f693a7a05eef253e7d7245c473cfdb0341bccb" gracePeriod=30
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.190518 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="proxy-httpd" containerID="cri-o://87d226406f3060a5893186f01c0a01237893b6d71a1105bff8164c0c403aa820" gracePeriod=30
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.201760 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.225:3000/\": EOF"
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.217295 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.228990 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-log" containerID="cri-o://16ed9393b594bcba9995cbddd987d4af316280f93afecf4901d8a1ac194fb57f" gracePeriod=30
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.229106 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-api" containerID="cri-o://9df6f80df8a28c5cb924f7024f6ec4d258b2d83a73b323c8ecaeecf9426f0cea" gracePeriod=30
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.246364 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" event={"ID":"a3bf5d7b-8d58-47f7-a92b-54ca738d3032","Type":"ContainerStarted","Data":"9354f255723e7b6668d4be41c64db50c3730ebd337c26f00cbd785ae8fe0c958"}
Jan 23 13:54:35 crc kubenswrapper[4771]: I0123 13:54:35.263292 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" podStartSLOduration=3.263264765 podStartE2EDuration="3.263264765s" podCreationTimestamp="2026-01-23 13:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:35.258141203 +0000 UTC m=+1316.280678838" watchObservedRunningTime="2026-01-23 13:54:35.263264765 +0000 UTC m=+1316.285802390"
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.244780 4771 generic.go:334] "Generic (PLEG): container finished" podID="8904af9e-4353-4861-8106-2ae4075fafeb" containerID="16ed9393b594bcba9995cbddd987d4af316280f93afecf4901d8a1ac194fb57f" exitCode=143
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.244902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8904af9e-4353-4861-8106-2ae4075fafeb","Type":"ContainerDied","Data":"16ed9393b594bcba9995cbddd987d4af316280f93afecf4901d8a1ac194fb57f"}
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.250267 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerID="87d226406f3060a5893186f01c0a01237893b6d71a1105bff8164c0c403aa820" exitCode=0
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.250315 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerID="656cd170138c0c79d8cd8bfb79f693a7a05eef253e7d7245c473cfdb0341bccb" exitCode=2
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.250333 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerID="859d79b828dbf7e9daaa88ef38ed160053969a50e469c2299c23907fba71922d" exitCode=0
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.251874 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerDied","Data":"87d226406f3060a5893186f01c0a01237893b6d71a1105bff8164c0c403aa820"}
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.251924 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerDied","Data":"656cd170138c0c79d8cd8bfb79f693a7a05eef253e7d7245c473cfdb0341bccb"}
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.251951 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
Jan 23 13:54:36 crc kubenswrapper[4771]: I0123 13:54:36.251972 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerDied","Data":"859d79b828dbf7e9daaa88ef38ed160053969a50e469c2299c23907fba71922d"}
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.343015 4771 generic.go:334] "Generic (PLEG): container finished" podID="8904af9e-4353-4861-8106-2ae4075fafeb" containerID="9df6f80df8a28c5cb924f7024f6ec4d258b2d83a73b323c8ecaeecf9426f0cea" exitCode=0
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.345127 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8904af9e-4353-4861-8106-2ae4075fafeb","Type":"ContainerDied","Data":"9df6f80df8a28c5cb924f7024f6ec4d258b2d83a73b323c8ecaeecf9426f0cea"}
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.492955 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.604520 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-combined-ca-bundle\") pod \"8904af9e-4353-4861-8106-2ae4075fafeb\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") "
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.604649 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-config-data\") pod \"8904af9e-4353-4861-8106-2ae4075fafeb\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") "
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.604805 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8904af9e-4353-4861-8106-2ae4075fafeb-logs\") pod \"8904af9e-4353-4861-8106-2ae4075fafeb\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") "
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.605018 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhrbs\" (UniqueName: \"kubernetes.io/projected/8904af9e-4353-4861-8106-2ae4075fafeb-kube-api-access-xhrbs\") pod \"8904af9e-4353-4861-8106-2ae4075fafeb\" (UID: \"8904af9e-4353-4861-8106-2ae4075fafeb\") "
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.605526 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8904af9e-4353-4861-8106-2ae4075fafeb-logs" (OuterVolumeSpecName: "logs") pod "8904af9e-4353-4861-8106-2ae4075fafeb" (UID: "8904af9e-4353-4861-8106-2ae4075fafeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.605904 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8904af9e-4353-4861-8106-2ae4075fafeb-logs\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.618026 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8904af9e-4353-4861-8106-2ae4075fafeb-kube-api-access-xhrbs" (OuterVolumeSpecName: "kube-api-access-xhrbs") pod "8904af9e-4353-4861-8106-2ae4075fafeb" (UID: "8904af9e-4353-4861-8106-2ae4075fafeb"). InnerVolumeSpecName "kube-api-access-xhrbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.653061 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8904af9e-4353-4861-8106-2ae4075fafeb" (UID: "8904af9e-4353-4861-8106-2ae4075fafeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.654700 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-config-data" (OuterVolumeSpecName: "config-data") pod "8904af9e-4353-4861-8106-2ae4075fafeb" (UID: "8904af9e-4353-4861-8106-2ae4075fafeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.708599 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhrbs\" (UniqueName: \"kubernetes.io/projected/8904af9e-4353-4861-8106-2ae4075fafeb-kube-api-access-xhrbs\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.708645 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:37 crc kubenswrapper[4771]: I0123 13:54:37.708657 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8904af9e-4353-4861-8106-2ae4075fafeb-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.361648 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8904af9e-4353-4861-8106-2ae4075fafeb","Type":"ContainerDied","Data":"5f40805d6f417a039a318bb56f7fa61ed404267f85f8800ebcbb9dda1f9aa34b"}
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.361722 4771 scope.go:117] "RemoveContainer" containerID="9df6f80df8a28c5cb924f7024f6ec4d258b2d83a73b323c8ecaeecf9426f0cea"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.361744 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.396281 4771 scope.go:117] "RemoveContainer" containerID="16ed9393b594bcba9995cbddd987d4af316280f93afecf4901d8a1ac194fb57f"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.408709 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.419544 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.440271 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 23 13:54:38 crc kubenswrapper[4771]: E0123 13:54:38.442126 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-log"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.442153 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-log"
Jan 23 13:54:38 crc kubenswrapper[4771]: E0123 13:54:38.442179 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-api"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.442190 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-api"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.442404 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-api"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.442675 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" containerName="nova-api-log"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.443883 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.447923 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.452819 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.457477 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.472754 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.631104 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-config-data\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.631154 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.631179 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.631286 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e444027-01dc-49ba-bb56-d365c923a91b-logs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.631320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8w9n\" (UniqueName: \"kubernetes.io/projected/8e444027-01dc-49ba-bb56-d365c923a91b-kube-api-access-p8w9n\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.631358 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.733912 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e444027-01dc-49ba-bb56-d365c923a91b-logs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.733986 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8w9n\" (UniqueName: \"kubernetes.io/projected/8e444027-01dc-49ba-bb56-d365c923a91b-kube-api-access-p8w9n\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.734063 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.734129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-config-data\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.734161 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.734181 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.734395 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e444027-01dc-49ba-bb56-d365c923a91b-logs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.742746 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.742892 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-config-data\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.743533 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-public-tls-certs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.748726 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.759361 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8w9n\" (UniqueName: \"kubernetes.io/projected/8e444027-01dc-49ba-bb56-d365c923a91b-kube-api-access-p8w9n\") pod \"nova-api-0\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " pod="openstack/nova-api-0"
Jan 23 13:54:38 crc kubenswrapper[4771]: I0123 13:54:38.775871 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 13:54:39 crc kubenswrapper[4771]: I0123 13:54:39.276682 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8904af9e-4353-4861-8106-2ae4075fafeb" path="/var/lib/kubelet/pods/8904af9e-4353-4861-8106-2ae4075fafeb/volumes"
Jan 23 13:54:39 crc kubenswrapper[4771]: I0123 13:54:39.318767 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 13:54:39 crc kubenswrapper[4771]: I0123 13:54:39.388035 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e444027-01dc-49ba-bb56-d365c923a91b","Type":"ContainerStarted","Data":"4818c939e6e0ae62b4e5a67b405acc8462cc60ea179b14edb39d6b848fff9baa"}
Jan 23 13:54:39 crc kubenswrapper[4771]: I0123 13:54:39.606972 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:39 crc kubenswrapper[4771]: I0123 13:54:39.635729 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.420970 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e444027-01dc-49ba-bb56-d365c923a91b","Type":"ContainerStarted","Data":"2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a"}
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.421878 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e444027-01dc-49ba-bb56-d365c923a91b","Type":"ContainerStarted","Data":"2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762"}
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.437095 4771 generic.go:334] "Generic (PLEG): container finished" podID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerID="94d0bb47ad9e6559e7f42169442801d1066ac95724b506586577a8f023776155" exitCode=0
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.437394 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerDied","Data":"94d0bb47ad9e6559e7f42169442801d1066ac95724b506586577a8f023776155"}
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.437485 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b82830b-998a-4c09-81fe-d34ae7b13f36","Type":"ContainerDied","Data":"5320f9c637fc70e89ee2688334ad664a16d8061f8bdc344ccb8fdb5ceaa289ff"}
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.437506 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5320f9c637fc70e89ee2688334ad664a16d8061f8bdc344ccb8fdb5ceaa289ff"
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.473373 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.473338069 podStartE2EDuration="2.473338069s" podCreationTimestamp="2026-01-23 13:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:40.468131985 +0000 UTC m=+1321.490669630" watchObservedRunningTime="2026-01-23 13:54:40.473338069 +0000 UTC m=+1321.495875694"
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.516741 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.533216 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.689846 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-run-httpd\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.690743 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.690973 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-log-httpd\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.691184 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-sg-core-conf-yaml\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.691741 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.692291 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-ceilometer-tls-certs\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.692430 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-scripts\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.692541 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84xj4\" (UniqueName: \"kubernetes.io/projected/5b82830b-998a-4c09-81fe-d34ae7b13f36-kube-api-access-84xj4\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.692587 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-config-data\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.692624 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-combined-ca-bundle\") pod \"5b82830b-998a-4c09-81fe-d34ae7b13f36\" (UID: \"5b82830b-998a-4c09-81fe-d34ae7b13f36\") "
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.694842 4771 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.694862 4771 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b82830b-998a-4c09-81fe-d34ae7b13f36-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.704900 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-scripts" (OuterVolumeSpecName: "scripts") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.705376 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b82830b-998a-4c09-81fe-d34ae7b13f36-kube-api-access-84xj4" (OuterVolumeSpecName: "kube-api-access-84xj4") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "kube-api-access-84xj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.794673 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.797578 4771 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.797614 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.797624 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84xj4\" (UniqueName: \"kubernetes.io/projected/5b82830b-998a-4c09-81fe-d34ae7b13f36-kube-api-access-84xj4\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.875776 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.889966 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.903134 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.903178 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:40 crc kubenswrapper[4771]: I0123 13:54:40.988379 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-config-data" (OuterVolumeSpecName: "config-data") pod "5b82830b-998a-4c09-81fe-d34ae7b13f36" (UID: "5b82830b-998a-4c09-81fe-d34ae7b13f36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.005182 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b82830b-998a-4c09-81fe-d34ae7b13f36-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.008021 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6r22x"]
Jan 23 13:54:41 crc kubenswrapper[4771]: E0123 13:54:41.008640 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="proxy-httpd"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.008668 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="proxy-httpd"
Jan 23 13:54:41 crc kubenswrapper[4771]: E0123 13:54:41.008691 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-central-agent"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.008704 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-central-agent"
Jan 23 13:54:41 crc kubenswrapper[4771]: E0123 13:54:41.008742 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="sg-core"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.008751 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="sg-core"
Jan 23 13:54:41 crc kubenswrapper[4771]: E0123 13:54:41.008767 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-notification-agent"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.008776 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-notification-agent"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.009023 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-notification-agent"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.009056 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="sg-core"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.009074 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="ceilometer-central-agent"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.009087 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" containerName="proxy-httpd"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.009948 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.022239 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.022732 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.026598 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r22x"]
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.107157 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wvhp\" (UniqueName: \"kubernetes.io/projected/42b71cd5-6305-4222-ad5f-7c8899419c5f-kube-api-access-7wvhp\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.107249 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-config-data\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.107329 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-scripts\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.107361 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.210630 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wvhp\" (UniqueName: \"kubernetes.io/projected/42b71cd5-6305-4222-ad5f-7c8899419c5f-kube-api-access-7wvhp\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.211311 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-config-data\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.212112 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-scripts\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.212208 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.216368 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-config-data\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.216741 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.222939 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-scripts\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.247151 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wvhp\" (UniqueName: \"kubernetes.io/projected/42b71cd5-6305-4222-ad5f-7c8899419c5f-kube-api-access-7wvhp\") pod \"nova-cell1-cell-mapping-6r22x\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.370998 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r22x"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.451084 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.488671 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.502423 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.520665 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.578186 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.578556 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.590526 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.591900 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.593740 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.737628 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91224b19-1b86-4154-ab78-a1004a2f9c0d-log-httpd\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.737763 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.737799 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln4vb\" (UniqueName: \"kubernetes.io/projected/91224b19-1b86-4154-ab78-a1004a2f9c0d-kube-api-access-ln4vb\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.737868 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.737987 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91224b19-1b86-4154-ab78-a1004a2f9c0d-run-httpd\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.738032 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-scripts\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.738054 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-config-data\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.738074 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.840287 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.840749 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln4vb\" (UniqueName: \"kubernetes.io/projected/91224b19-1b86-4154-ab78-a1004a2f9c0d-kube-api-access-ln4vb\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.840829 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.840926 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91224b19-1b86-4154-ab78-a1004a2f9c0d-run-httpd\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.840970 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-scripts\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.840996 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-config-data\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.841026 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.841067 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91224b19-1b86-4154-ab78-a1004a2f9c0d-log-httpd\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.841877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91224b19-1b86-4154-ab78-a1004a2f9c0d-log-httpd\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.841961 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91224b19-1b86-4154-ab78-a1004a2f9c0d-run-httpd\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.851297 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-config-data\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.852054 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.852235 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.855097 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.856646 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91224b19-1b86-4154-ab78-a1004a2f9c0d-scripts\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.866381 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln4vb\" (UniqueName: \"kubernetes.io/projected/91224b19-1b86-4154-ab78-a1004a2f9c0d-kube-api-access-ln4vb\") pod \"ceilometer-0\" (UID: \"91224b19-1b86-4154-ab78-a1004a2f9c0d\") " pod="openstack/ceilometer-0"
Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.939355 4771 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:54:41 crc kubenswrapper[4771]: I0123 13:54:41.970199 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r22x"] Jan 23 13:54:42 crc kubenswrapper[4771]: I0123 13:54:42.462681 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r22x" event={"ID":"42b71cd5-6305-4222-ad5f-7c8899419c5f","Type":"ContainerStarted","Data":"fc468bdb99b3ca2be9b2472b3959694668010762c839d572a1ebf7a548fc4797"} Jan 23 13:54:42 crc kubenswrapper[4771]: I0123 13:54:42.463112 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r22x" event={"ID":"42b71cd5-6305-4222-ad5f-7c8899419c5f","Type":"ContainerStarted","Data":"8252bb735f0b9c3609c3c77b0b7960b51087a54cec686559ec527491619f77aa"} Jan 23 13:54:42 crc kubenswrapper[4771]: I0123 13:54:42.499690 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6r22x" podStartSLOduration=2.499657527 podStartE2EDuration="2.499657527s" podCreationTimestamp="2026-01-23 13:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:42.486832961 +0000 UTC m=+1323.509370586" watchObservedRunningTime="2026-01-23 13:54:42.499657527 +0000 UTC m=+1323.522195152" Jan 23 13:54:42 crc kubenswrapper[4771]: I0123 13:54:42.609238 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:54:42 crc kubenswrapper[4771]: W0123 13:54:42.611363 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91224b19_1b86_4154_ab78_a1004a2f9c0d.slice/crio-9fdaf0a9e0ab91022d5bb6d37412ac21d06cd2b4b149f6af80a4e5a179fb0b00 WatchSource:0}: Error finding container 9fdaf0a9e0ab91022d5bb6d37412ac21d06cd2b4b149f6af80a4e5a179fb0b00: Status 404 returned error can't find the container with id 9fdaf0a9e0ab91022d5bb6d37412ac21d06cd2b4b149f6af80a4e5a179fb0b00 Jan 23 13:54:42 crc kubenswrapper[4771]: I0123 13:54:42.772521 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" Jan 23 13:54:42 crc kubenswrapper[4771]: I0123 13:54:42.865193 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c69974895-gdz7g"] Jan 23 13:54:42 crc kubenswrapper[4771]: I0123 13:54:42.865575 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerName="dnsmasq-dns" containerID="cri-o://7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d" gracePeriod=10 Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.245256 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b82830b-998a-4c09-81fe-d34ae7b13f36" path="/var/lib/kubelet/pods/5b82830b-998a-4c09-81fe-d34ae7b13f36/volumes" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.425875 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.515724 4771 generic.go:334] "Generic (PLEG): container finished" podID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerID="7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d" exitCode=0 Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.516256 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" event={"ID":"968095d0-4c3b-4224-837b-b7a36dfb530a","Type":"ContainerDied","Data":"7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d"} Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.516290 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" event={"ID":"968095d0-4c3b-4224-837b-b7a36dfb530a","Type":"ContainerDied","Data":"d25a375ef8381c448ed05f84be6ced4fc6ca1e031a00d4b26e699a5607ee8975"} Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.516310 4771 scope.go:117] "RemoveContainer" containerID="7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.516498 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.532573 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91224b19-1b86-4154-ab78-a1004a2f9c0d","Type":"ContainerStarted","Data":"4c161f7ad57834c88b6d795c9fc70d645168ab39b9475ad830116d762309170f"} Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.532630 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91224b19-1b86-4154-ab78-a1004a2f9c0d","Type":"ContainerStarted","Data":"9532cad6b0e1ee314cb6192695346a4408958529c842a0557f0952403a273b41"} Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.532640 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91224b19-1b86-4154-ab78-a1004a2f9c0d","Type":"ContainerStarted","Data":"9fdaf0a9e0ab91022d5bb6d37412ac21d06cd2b4b149f6af80a4e5a179fb0b00"} Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.595089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-config\") pod \"968095d0-4c3b-4224-837b-b7a36dfb530a\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.595144 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-swift-storage-0\") pod \"968095d0-4c3b-4224-837b-b7a36dfb530a\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.595210 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-sb\") pod \"968095d0-4c3b-4224-837b-b7a36dfb530a\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.595240 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rk2j\" (UniqueName: \"kubernetes.io/projected/968095d0-4c3b-4224-837b-b7a36dfb530a-kube-api-access-2rk2j\") pod 
\"968095d0-4c3b-4224-837b-b7a36dfb530a\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.595272 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-nb\") pod \"968095d0-4c3b-4224-837b-b7a36dfb530a\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.595329 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-svc\") pod \"968095d0-4c3b-4224-837b-b7a36dfb530a\" (UID: \"968095d0-4c3b-4224-837b-b7a36dfb530a\") " Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.621278 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/968095d0-4c3b-4224-837b-b7a36dfb530a-kube-api-access-2rk2j" (OuterVolumeSpecName: "kube-api-access-2rk2j") pod "968095d0-4c3b-4224-837b-b7a36dfb530a" (UID: "968095d0-4c3b-4224-837b-b7a36dfb530a"). InnerVolumeSpecName "kube-api-access-2rk2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.703483 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rk2j\" (UniqueName: \"kubernetes.io/projected/968095d0-4c3b-4224-837b-b7a36dfb530a-kube-api-access-2rk2j\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.708710 4771 scope.go:117] "RemoveContainer" containerID="382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.716075 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "968095d0-4c3b-4224-837b-b7a36dfb530a" (UID: "968095d0-4c3b-4224-837b-b7a36dfb530a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.717079 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-config" (OuterVolumeSpecName: "config") pod "968095d0-4c3b-4224-837b-b7a36dfb530a" (UID: "968095d0-4c3b-4224-837b-b7a36dfb530a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.765321 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "968095d0-4c3b-4224-837b-b7a36dfb530a" (UID: "968095d0-4c3b-4224-837b-b7a36dfb530a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.796546 4771 scope.go:117] "RemoveContainer" containerID="7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.796933 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "968095d0-4c3b-4224-837b-b7a36dfb530a" (UID: "968095d0-4c3b-4224-837b-b7a36dfb530a"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:43 crc kubenswrapper[4771]: E0123 13:54:43.797301 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d\": container with ID starting with 7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d not found: ID does not exist" containerID="7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.797332 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d"} err="failed to get container status \"7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d\": rpc error: code = NotFound desc = could not find container \"7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d\": container with ID starting with 7328e71570181de46f81e03948ff40f3b75ed0158d18efd1e1da8131872a5a8d not found: ID does not exist" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.797354 4771 scope.go:117] "RemoveContainer" containerID="382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb" Jan 23 13:54:43 crc kubenswrapper[4771]: E0123 13:54:43.801506 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb\": container with ID starting with 382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb not found: ID does not exist" containerID="382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.801542 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb"} err="failed to get container status \"382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb\": rpc error: code = NotFound desc = could not find container \"382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb\": container with ID starting with 382579ac6d82f86c6fb3d862b61cc8a252dcdabd09360d08f23859428e2037cb not found: ID does not exist" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.811994 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.812035 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.812047 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.812057 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.862983 4771 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "968095d0-4c3b-4224-837b-b7a36dfb530a" (UID: "968095d0-4c3b-4224-837b-b7a36dfb530a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:54:43 crc kubenswrapper[4771]: I0123 13:54:43.914043 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/968095d0-4c3b-4224-837b-b7a36dfb530a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:44 crc kubenswrapper[4771]: I0123 13:54:44.157822 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c69974895-gdz7g"] Jan 23 13:54:44 crc kubenswrapper[4771]: I0123 13:54:44.170975 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c69974895-gdz7g"] Jan 23 13:54:44 crc kubenswrapper[4771]: I0123 13:54:44.551944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91224b19-1b86-4154-ab78-a1004a2f9c0d","Type":"ContainerStarted","Data":"2e7aae09f81e6f31f64fe2dca8dd2aa4b247eabe0ccc83f1514069f364497a03"} Jan 23 13:54:45 crc kubenswrapper[4771]: I0123 13:54:45.241401 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" path="/var/lib/kubelet/pods/968095d0-4c3b-4224-837b-b7a36dfb530a/volumes" Jan 23 13:54:46 crc kubenswrapper[4771]: I0123 13:54:46.600779 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91224b19-1b86-4154-ab78-a1004a2f9c0d","Type":"ContainerStarted","Data":"ff8dc71df1b855223b55d554262d01ab880bccfd4e04dff19594b2c15aa8e17e"} Jan 23 13:54:46 crc kubenswrapper[4771]: I0123 13:54:46.604053 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 13:54:48 crc kubenswrapper[4771]: I0123 13:54:48.244092 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c69974895-gdz7g" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.217:5353: i/o timeout" Jan 23 13:54:48 crc kubenswrapper[4771]: I0123 13:54:48.777419 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:54:48 crc kubenswrapper[4771]: I0123 13:54:48.779878 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:54:49 crc kubenswrapper[4771]: I0123 13:54:49.672803 4771 generic.go:334] "Generic (PLEG): container finished" podID="42b71cd5-6305-4222-ad5f-7c8899419c5f" containerID="fc468bdb99b3ca2be9b2472b3959694668010762c839d572a1ebf7a548fc4797" exitCode=0 Jan 23 13:54:49 crc kubenswrapper[4771]: I0123 13:54:49.672933 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r22x" event={"ID":"42b71cd5-6305-4222-ad5f-7c8899419c5f","Type":"ContainerDied","Data":"fc468bdb99b3ca2be9b2472b3959694668010762c839d572a1ebf7a548fc4797"} Jan 23 13:54:49 crc kubenswrapper[4771]: I0123 13:54:49.697040 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.066591632 podStartE2EDuration="8.697010943s" podCreationTimestamp="2026-01-23 13:54:41 +0000 UTC" firstStartedPulling="2026-01-23 13:54:42.614820435 +0000 UTC m=+1323.637358060" lastFinishedPulling="2026-01-23 13:54:45.245239746 
+0000 UTC m=+1326.267777371" observedRunningTime="2026-01-23 13:54:46.641961743 +0000 UTC m=+1327.664499368" watchObservedRunningTime="2026-01-23 13:54:49.697010943 +0000 UTC m=+1330.719548578" Jan 23 13:54:49 crc kubenswrapper[4771]: I0123 13:54:49.788663 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:49 crc kubenswrapper[4771]: I0123 13:54:49.788689 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.119268 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r22x" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.205235 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-combined-ca-bundle\") pod \"42b71cd5-6305-4222-ad5f-7c8899419c5f\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.205834 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-config-data\") pod \"42b71cd5-6305-4222-ad5f-7c8899419c5f\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.205887 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-scripts\") pod \"42b71cd5-6305-4222-ad5f-7c8899419c5f\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.205942 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wvhp\" (UniqueName: \"kubernetes.io/projected/42b71cd5-6305-4222-ad5f-7c8899419c5f-kube-api-access-7wvhp\") pod \"42b71cd5-6305-4222-ad5f-7c8899419c5f\" (UID: \"42b71cd5-6305-4222-ad5f-7c8899419c5f\") " Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.219212 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-scripts" (OuterVolumeSpecName: "scripts") pod "42b71cd5-6305-4222-ad5f-7c8899419c5f" (UID: "42b71cd5-6305-4222-ad5f-7c8899419c5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.219484 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b71cd5-6305-4222-ad5f-7c8899419c5f-kube-api-access-7wvhp" (OuterVolumeSpecName: "kube-api-access-7wvhp") pod "42b71cd5-6305-4222-ad5f-7c8899419c5f" (UID: "42b71cd5-6305-4222-ad5f-7c8899419c5f"). InnerVolumeSpecName "kube-api-access-7wvhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.249309 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-config-data" (OuterVolumeSpecName: "config-data") pod "42b71cd5-6305-4222-ad5f-7c8899419c5f" (UID: "42b71cd5-6305-4222-ad5f-7c8899419c5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.300642 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42b71cd5-6305-4222-ad5f-7c8899419c5f" (UID: "42b71cd5-6305-4222-ad5f-7c8899419c5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.309720 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.309783 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.309797 4771 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42b71cd5-6305-4222-ad5f-7c8899419c5f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.309809 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wvhp\" (UniqueName: \"kubernetes.io/projected/42b71cd5-6305-4222-ad5f-7c8899419c5f-kube-api-access-7wvhp\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.705890 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6r22x" event={"ID":"42b71cd5-6305-4222-ad5f-7c8899419c5f","Type":"ContainerDied","Data":"8252bb735f0b9c3609c3c77b0b7960b51087a54cec686559ec527491619f77aa"} Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.705950 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8252bb735f0b9c3609c3c77b0b7960b51087a54cec686559ec527491619f77aa" Jan 23 13:54:51 crc kubenswrapper[4771]: I0123 13:54:51.705987 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6r22x" Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.074707 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.075595 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" containerName="nova-scheduler-scheduler" containerID="cri-o://325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" gracePeriod=30 Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.088447 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.088789 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-log" containerID="cri-o://2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762" gracePeriod=30 Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.088997 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-api" containerID="cri-o://2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a" gracePeriod=30 Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.160424 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.160752 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-log" containerID="cri-o://7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2" gracePeriod=30 Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.161186 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-metadata" containerID="cri-o://48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826" gracePeriod=30 Jan 23 13:54:52 crc kubenswrapper[4771]: E0123 13:54:52.259613 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:52 crc kubenswrapper[4771]: E0123 13:54:52.262177 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:52 crc kubenswrapper[4771]: E0123 13:54:52.263538 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:52 crc kubenswrapper[4771]: E0123 13:54:52.263580 4771 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" containerName="nova-scheduler-scheduler" Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.737852 4771 generic.go:334] "Generic (PLEG): container finished" podID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerID="7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2" exitCode=143 Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.737946 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c92067-5c4d-4c0a-a273-e6c274bf1660","Type":"ContainerDied","Data":"7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2"} Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.749737 4771 generic.go:334] "Generic (PLEG): container finished" podID="8e444027-01dc-49ba-bb56-d365c923a91b" containerID="2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762" exitCode=143 Jan 23 13:54:52 crc kubenswrapper[4771]: I0123 13:54:52.749800 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e444027-01dc-49ba-bb56-d365c923a91b","Type":"ContainerDied","Data":"2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762"} Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.270738 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.392197 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rqm6\" (UniqueName: \"kubernetes.io/projected/b4c92067-5c4d-4c0a-a273-e6c274bf1660-kube-api-access-2rqm6\") pod \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.392283 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-nova-metadata-tls-certs\") pod \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.392392 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c92067-5c4d-4c0a-a273-e6c274bf1660-logs\") pod \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.392456 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-combined-ca-bundle\") pod \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.392568 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-config-data\") pod \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\" (UID: \"b4c92067-5c4d-4c0a-a273-e6c274bf1660\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.393306 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c92067-5c4d-4c0a-a273-e6c274bf1660-logs" 
(OuterVolumeSpecName: "logs") pod "b4c92067-5c4d-4c0a-a273-e6c274bf1660" (UID: "b4c92067-5c4d-4c0a-a273-e6c274bf1660"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.397700 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c92067-5c4d-4c0a-a273-e6c274bf1660-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.403022 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c92067-5c4d-4c0a-a273-e6c274bf1660-kube-api-access-2rqm6" (OuterVolumeSpecName: "kube-api-access-2rqm6") pod "b4c92067-5c4d-4c0a-a273-e6c274bf1660" (UID: "b4c92067-5c4d-4c0a-a273-e6c274bf1660"). InnerVolumeSpecName "kube-api-access-2rqm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.444008 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c92067-5c4d-4c0a-a273-e6c274bf1660" (UID: "b4c92067-5c4d-4c0a-a273-e6c274bf1660"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.473862 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-config-data" (OuterVolumeSpecName: "config-data") pod "b4c92067-5c4d-4c0a-a273-e6c274bf1660" (UID: "b4c92067-5c4d-4c0a-a273-e6c274bf1660"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.481520 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.497950 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b4c92067-5c4d-4c0a-a273-e6c274bf1660" (UID: "b4c92067-5c4d-4c0a-a273-e6c274bf1660"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.499763 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.499814 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rqm6\" (UniqueName: \"kubernetes.io/projected/b4c92067-5c4d-4c0a-a273-e6c274bf1660-kube-api-access-2rqm6\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.499828 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.499838 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c92067-5c4d-4c0a-a273-e6c274bf1660-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.601103 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8w9n\" (UniqueName: \"kubernetes.io/projected/8e444027-01dc-49ba-bb56-d365c923a91b-kube-api-access-p8w9n\") pod \"8e444027-01dc-49ba-bb56-d365c923a91b\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.601359 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-config-data\") pod \"8e444027-01dc-49ba-bb56-d365c923a91b\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.601488 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-internal-tls-certs\") pod \"8e444027-01dc-49ba-bb56-d365c923a91b\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.601530 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-combined-ca-bundle\") pod \"8e444027-01dc-49ba-bb56-d365c923a91b\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.601621 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-public-tls-certs\") pod \"8e444027-01dc-49ba-bb56-d365c923a91b\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.602258 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e444027-01dc-49ba-bb56-d365c923a91b-logs\") pod \"8e444027-01dc-49ba-bb56-d365c923a91b\" (UID: \"8e444027-01dc-49ba-bb56-d365c923a91b\") " Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.603254 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e444027-01dc-49ba-bb56-d365c923a91b-logs" (OuterVolumeSpecName: "logs") pod "8e444027-01dc-49ba-bb56-d365c923a91b" 
(UID: "8e444027-01dc-49ba-bb56-d365c923a91b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.604736 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e444027-01dc-49ba-bb56-d365c923a91b-kube-api-access-p8w9n" (OuterVolumeSpecName: "kube-api-access-p8w9n") pod "8e444027-01dc-49ba-bb56-d365c923a91b" (UID: "8e444027-01dc-49ba-bb56-d365c923a91b"). InnerVolumeSpecName "kube-api-access-p8w9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.638782 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e444027-01dc-49ba-bb56-d365c923a91b" (UID: "8e444027-01dc-49ba-bb56-d365c923a91b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.653318 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-config-data" (OuterVolumeSpecName: "config-data") pod "8e444027-01dc-49ba-bb56-d365c923a91b" (UID: "8e444027-01dc-49ba-bb56-d365c923a91b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.677775 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e444027-01dc-49ba-bb56-d365c923a91b" (UID: "8e444027-01dc-49ba-bb56-d365c923a91b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.693824 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e444027-01dc-49ba-bb56-d365c923a91b" (UID: "8e444027-01dc-49ba-bb56-d365c923a91b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.704532 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.704703 4771 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.704763 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.704815 4771 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e444027-01dc-49ba-bb56-d365c923a91b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.704868 4771 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e444027-01dc-49ba-bb56-d365c923a91b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.704939 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8w9n\" (UniqueName: \"kubernetes.io/projected/8e444027-01dc-49ba-bb56-d365c923a91b-kube-api-access-p8w9n\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.774813 4771 generic.go:334] "Generic (PLEG): container finished" podID="8e444027-01dc-49ba-bb56-d365c923a91b" containerID="2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a" exitCode=0 Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.774896 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e444027-01dc-49ba-bb56-d365c923a91b","Type":"ContainerDied","Data":"2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a"} Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.774930 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e444027-01dc-49ba-bb56-d365c923a91b","Type":"ContainerDied","Data":"4818c939e6e0ae62b4e5a67b405acc8462cc60ea179b14edb39d6b848fff9baa"} Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.774949 4771 scope.go:117] "RemoveContainer" containerID="2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.775118 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.782610 4771 generic.go:334] "Generic (PLEG): container finished" podID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerID="48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826" exitCode=0 Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.782646 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c92067-5c4d-4c0a-a273-e6c274bf1660","Type":"ContainerDied","Data":"48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826"} Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.782669 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c92067-5c4d-4c0a-a273-e6c274bf1660","Type":"ContainerDied","Data":"f0cd5427e054d7d4d7ba224079a63521f8569a359ea9d86e0a0038c8adca2688"} Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.782730 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.876173 4771 scope.go:117] "RemoveContainer" containerID="2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.905052 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.936607 4771 scope.go:117] "RemoveContainer" containerID="2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.937642 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a\": container with ID starting with 2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a not found: ID does not exist" containerID="2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.937687 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a"} err="failed to get container status \"2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a\": rpc error: code = NotFound desc = could not find container \"2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a\": container with ID starting with 2c03c19f6519d5cc9fae6ede7a023639d537f212797f422f9d8dd67454df0c7a not found: ID does not exist" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.937747 4771 scope.go:117] "RemoveContainer" containerID="2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.939546 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762\": container with ID starting with 2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762 not found: ID does not exist" containerID="2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.939610 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762"} err="failed to get container 
status \"2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762\": rpc error: code = NotFound desc = could not find container \"2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762\": container with ID starting with 2681352a0d242a315f0546571fdfab4a70a843e2f18a2cf819876470409b8762 not found: ID does not exist" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.939641 4771 scope.go:117] "RemoveContainer" containerID="48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.948630 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.962815 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.977176 4771 scope.go:117] "RemoveContainer" containerID="7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.977235 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.993650 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.994226 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-log" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994240 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-log" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.994265 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerName="dnsmasq-dns" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994271 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerName="dnsmasq-dns" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.994285 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerName="init" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994291 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerName="init" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.994312 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b71cd5-6305-4222-ad5f-7c8899419c5f" containerName="nova-manage" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994318 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b71cd5-6305-4222-ad5f-7c8899419c5f" containerName="nova-manage" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.994329 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-api" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994335 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-api" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.994352 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-log" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994359 4771 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-log" Jan 23 13:54:54 crc kubenswrapper[4771]: E0123 13:54:54.994369 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-metadata" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994377 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-metadata" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994616 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-log" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994636 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" containerName="nova-api-api" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994654 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b71cd5-6305-4222-ad5f-7c8899419c5f" containerName="nova-manage" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994670 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-metadata" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994682 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-log" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.994695 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="968095d0-4c3b-4224-837b-b7a36dfb530a" containerName="dnsmasq-dns" Jan 23 13:54:54 crc kubenswrapper[4771]: I0123 13:54:54.996062 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.001650 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.001786 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.008687 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.027034 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.028105 4771 scope.go:117] "RemoveContainer" containerID="48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826" Jan 23 13:54:55 crc kubenswrapper[4771]: E0123 13:54:55.028792 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826\": container with ID starting with 48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826 not found: ID does not exist" containerID="48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.028861 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826"} err="failed to get container status \"48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826\": rpc error: code = NotFound desc = could not find container \"48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826\": container with ID starting with 48d6bd58b3a9d751f7c7d340109c0055f5a814c6f263dfdb8d7539110a4aa826 not found: ID does not exist" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.028897 4771 scope.go:117] "RemoveContainer" containerID="7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2" Jan 23 13:54:55 crc kubenswrapper[4771]: E0123 13:54:55.029360 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2\": container with ID starting with 7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2 not found: ID does not exist" containerID="7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.029396 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2"} err="failed to get container status \"7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2\": rpc error: code = NotFound desc = could not find container \"7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2\": container with ID starting with 7c9fcfcef86619349e434f81f9fc6608ab08c5f26d28120226ff2a5cf16a68f2 not found: ID does not exist" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.030136 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.032948 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.033328 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.033378 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.038809 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115157 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115225 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-public-tls-certs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115320 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsj6\" (UniqueName: \"kubernetes.io/projected/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-kube-api-access-8hsj6\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115368 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/927c2f44-e120-4c3a-8871-83f28acd42bb-logs\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115423 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rqz\" (UniqueName: \"kubernetes.io/projected/927c2f44-e120-4c3a-8871-83f28acd42bb-kube-api-access-d7rqz\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115462 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-config-data\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115537 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-logs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115576 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-config-data\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115614 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115666 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.115769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218587 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218671 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218693 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218760 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218793 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-public-tls-certs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218847 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hsj6\" (UniqueName: \"kubernetes.io/projected/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-kube-api-access-8hsj6\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc 
kubenswrapper[4771]: I0123 13:54:55.218875 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/927c2f44-e120-4c3a-8871-83f28acd42bb-logs\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rqz\" (UniqueName: \"kubernetes.io/projected/927c2f44-e120-4c3a-8871-83f28acd42bb-kube-api-access-d7rqz\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218940 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-config-data\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218969 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-logs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.218992 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-config-data\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.225608 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/927c2f44-e120-4c3a-8871-83f28acd42bb-logs\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.229072 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-logs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.229783 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-config-data\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.230170 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.230172 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/927c2f44-e120-4c3a-8871-83f28acd42bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.232222 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.233048 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-public-tls-certs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.233572 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-config-data\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.235758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.259968 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsj6\" (UniqueName: \"kubernetes.io/projected/29cef0f5-6afd-4a1f-af4a-df21e4e9336a-kube-api-access-8hsj6\") pod \"nova-api-0\" (UID: \"29cef0f5-6afd-4a1f-af4a-df21e4e9336a\") " pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.260232 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rqz\" (UniqueName: \"kubernetes.io/projected/927c2f44-e120-4c3a-8871-83f28acd42bb-kube-api-access-d7rqz\") pod \"nova-metadata-0\" (UID: \"927c2f44-e120-4c3a-8871-83f28acd42bb\") " pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.263279 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e444027-01dc-49ba-bb56-d365c923a91b" path="/var/lib/kubelet/pods/8e444027-01dc-49ba-bb56-d365c923a91b/volumes" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.264530 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" path="/var/lib/kubelet/pods/b4c92067-5c4d-4c0a-a273-e6c274bf1660/volumes" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.329126 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.359046 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 13:54:55 crc kubenswrapper[4771]: I0123 13:54:55.847495 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.018510 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 13:54:56 crc kubenswrapper[4771]: W0123 13:54:56.018834 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29cef0f5_6afd_4a1f_af4a_df21e4e9336a.slice/crio-2932121a56f2f31cd15ec68cf13ef3e9ddb013989f42664869bee6d2da167297 WatchSource:0}: Error finding container 2932121a56f2f31cd15ec68cf13ef3e9ddb013989f42664869bee6d2da167297: Status 404 returned error can't find the container with id 2932121a56f2f31cd15ec68cf13ef3e9ddb013989f42664869bee6d2da167297 Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.810686 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29cef0f5-6afd-4a1f-af4a-df21e4e9336a","Type":"ContainerStarted","Data":"89d249e98fc4ebd5bff3e885177fdd9ed84d8ce2cf6131c81d04c1e92377649b"} Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.811710 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29cef0f5-6afd-4a1f-af4a-df21e4e9336a","Type":"ContainerStarted","Data":"8ae9a578525e18c9646478618542e64cfa02fb6fb90359de39893ed80e3aac07"} Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.811734 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"29cef0f5-6afd-4a1f-af4a-df21e4e9336a","Type":"ContainerStarted","Data":"2932121a56f2f31cd15ec68cf13ef3e9ddb013989f42664869bee6d2da167297"} Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.814186 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"927c2f44-e120-4c3a-8871-83f28acd42bb","Type":"ContainerStarted","Data":"b6ffb1af2dba7c4a21a411b3a7774c2082d871376dfe03308aacdcc2ef54f1e0"} Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.814296 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"927c2f44-e120-4c3a-8871-83f28acd42bb","Type":"ContainerStarted","Data":"7d6316d559fefca0212d0bf70872a5f9330b9a6fc843d9aef39daa93c7b2d8c0"} Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.814320 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"927c2f44-e120-4c3a-8871-83f28acd42bb","Type":"ContainerStarted","Data":"8e4f42daf0ca743d901b7b66a3410e529b5829dbd6cea61639a180160f716e12"} Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.839358 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.839320425 podStartE2EDuration="2.839320425s" podCreationTimestamp="2026-01-23 13:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:56.837587301 +0000 UTC m=+1337.860124966" watchObservedRunningTime="2026-01-23 13:54:56.839320425 +0000 UTC m=+1337.861858060" Jan 23 13:54:56 crc kubenswrapper[4771]: I0123 13:54:56.876886 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.876854975 podStartE2EDuration="2.876854975s" podCreationTimestamp="2026-01-23 13:54:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:54:56.865768743 +0000 UTC m=+1337.888306368" watchObservedRunningTime="2026-01-23 13:54:56.876854975 +0000 UTC m=+1337.899392600" Jan 23 13:54:57 crc kubenswrapper[4771]: E0123 13:54:57.257781 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:57 crc kubenswrapper[4771]: E0123 13:54:57.259692 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:57 crc kubenswrapper[4771]: E0123 13:54:57.261027 4771 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 13:54:57 crc kubenswrapper[4771]: E0123 13:54:57.261077 4771 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" containerName="nova-scheduler-scheduler" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.491113 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.643778 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-config-data\") pod \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.643869 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-combined-ca-bundle\") pod \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.644262 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7h5f\" (UniqueName: \"kubernetes.io/projected/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-kube-api-access-h7h5f\") pod \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\" (UID: \"d99fb4dc-8562-41cc-a3a4-a4a00538ad51\") " Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.658425 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-kube-api-access-h7h5f" (OuterVolumeSpecName: "kube-api-access-h7h5f") pod "d99fb4dc-8562-41cc-a3a4-a4a00538ad51" (UID: "d99fb4dc-8562-41cc-a3a4-a4a00538ad51"). InnerVolumeSpecName "kube-api-access-h7h5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.678832 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d99fb4dc-8562-41cc-a3a4-a4a00538ad51" (UID: "d99fb4dc-8562-41cc-a3a4-a4a00538ad51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.686588 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-config-data" (OuterVolumeSpecName: "config-data") pod "d99fb4dc-8562-41cc-a3a4-a4a00538ad51" (UID: "d99fb4dc-8562-41cc-a3a4-a4a00538ad51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.747181 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7h5f\" (UniqueName: \"kubernetes.io/projected/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-kube-api-access-h7h5f\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.747224 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.747235 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d99fb4dc-8562-41cc-a3a4-a4a00538ad51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.962111 4771 generic.go:334] "Generic (PLEG): container finished" podID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" exitCode=0 Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.962187 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d99fb4dc-8562-41cc-a3a4-a4a00538ad51","Type":"ContainerDied","Data":"325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499"} Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.962221 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d99fb4dc-8562-41cc-a3a4-a4a00538ad51","Type":"ContainerDied","Data":"afff33868df1c1b082ebde144da22414e36fd2d46c20fa072b46e5d6818c687c"} Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.962212 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:58 crc kubenswrapper[4771]: I0123 13:54:58.962242 4771 scope.go:117] "RemoveContainer" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.003554 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.015073 4771 scope.go:117] "RemoveContainer" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" Jan 23 13:54:59 crc kubenswrapper[4771]: E0123 13:54:59.018133 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499\": container with ID starting with 325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499 not found: ID does not exist" containerID="325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.018191 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499"} err="failed to get container status \"325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499\": rpc error: code = NotFound desc = could not find container \"325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499\": container with ID starting with 325bb9681984b34d8c0d6dade9ff99b945bb51177dda383f902e110918dd2499 not found: ID does not exist" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.031548 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.058531 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:59 crc kubenswrapper[4771]: E0123 13:54:59.059694 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" containerName="nova-scheduler-scheduler" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.059792 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" containerName="nova-scheduler-scheduler" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.060195 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" containerName="nova-scheduler-scheduler" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.061489 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.069125 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.099317 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.157113 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbd0d04-3607-4e92-b24c-e2004269a392-config-data\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.157194 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbd0d04-3607-4e92-b24c-e2004269a392-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.157302 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm2h4\" (UniqueName: \"kubernetes.io/projected/6cbd0d04-3607-4e92-b24c-e2004269a392-kube-api-access-hm2h4\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.186817 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": context deadline exceeded" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.186830 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b4c92067-5c4d-4c0a-a273-e6c274bf1660" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.242169 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99fb4dc-8562-41cc-a3a4-a4a00538ad51" path="/var/lib/kubelet/pods/d99fb4dc-8562-41cc-a3a4-a4a00538ad51/volumes" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.259550 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbd0d04-3607-4e92-b24c-e2004269a392-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.259683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm2h4\" (UniqueName: \"kubernetes.io/projected/6cbd0d04-3607-4e92-b24c-e2004269a392-kube-api-access-hm2h4\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.259807 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbd0d04-3607-4e92-b24c-e2004269a392-config-data\") pod \"nova-scheduler-0\" (UID: 
\"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.264965 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbd0d04-3607-4e92-b24c-e2004269a392-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.267838 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbd0d04-3607-4e92-b24c-e2004269a392-config-data\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.297654 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm2h4\" (UniqueName: \"kubernetes.io/projected/6cbd0d04-3607-4e92-b24c-e2004269a392-kube-api-access-hm2h4\") pod \"nova-scheduler-0\" (UID: \"6cbd0d04-3607-4e92-b24c-e2004269a392\") " pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.396150 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 13:54:59 crc kubenswrapper[4771]: W0123 13:54:59.886787 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cbd0d04_3607_4e92_b24c_e2004269a392.slice/crio-36b7c17ec74cf3582e302c590e4b7cd830256f342668cb6fb0f385905bde445e WatchSource:0}: Error finding container 36b7c17ec74cf3582e302c590e4b7cd830256f342668cb6fb0f385905bde445e: Status 404 returned error can't find the container with id 36b7c17ec74cf3582e302c590e4b7cd830256f342668cb6fb0f385905bde445e Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.888498 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 13:54:59 crc kubenswrapper[4771]: I0123 13:54:59.990830 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6cbd0d04-3607-4e92-b24c-e2004269a392","Type":"ContainerStarted","Data":"36b7c17ec74cf3582e302c590e4b7cd830256f342668cb6fb0f385905bde445e"} Jan 23 13:55:00 crc kubenswrapper[4771]: I0123 13:55:00.311962 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:55:00 crc kubenswrapper[4771]: I0123 13:55:00.312069 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:55:00 crc kubenswrapper[4771]: I0123 13:55:00.312142 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 13:55:00 crc kubenswrapper[4771]: I0123 13:55:00.313468 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57cfa0bafaf927f754bb5bd9dae0b9c910ada95388993f47d6c2b51a3916a54d"} 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:55:00 crc kubenswrapper[4771]: I0123 13:55:00.313555 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://57cfa0bafaf927f754bb5bd9dae0b9c910ada95388993f47d6c2b51a3916a54d" gracePeriod=600 Jan 23 13:55:00 crc kubenswrapper[4771]: I0123 13:55:00.330096 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 13:55:00 crc kubenswrapper[4771]: I0123 13:55:00.330187 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 13:55:01 crc kubenswrapper[4771]: I0123 13:55:01.007066 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6cbd0d04-3607-4e92-b24c-e2004269a392","Type":"ContainerStarted","Data":"0c2652f491c25d8f03a2b11804024129b2aa16fb9c794c979cf9f2d97b6a03eb"} Jan 23 13:55:01 crc kubenswrapper[4771]: I0123 13:55:01.011597 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="57cfa0bafaf927f754bb5bd9dae0b9c910ada95388993f47d6c2b51a3916a54d" exitCode=0 Jan 23 13:55:01 crc kubenswrapper[4771]: I0123 13:55:01.011632 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"57cfa0bafaf927f754bb5bd9dae0b9c910ada95388993f47d6c2b51a3916a54d"} Jan 23 13:55:01 crc kubenswrapper[4771]: I0123 13:55:01.011691 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40"} Jan 23 13:55:01 crc kubenswrapper[4771]: I0123 13:55:01.011718 4771 scope.go:117] "RemoveContainer" containerID="dfc914e995173c379318536f5b71f7a2d9eafa2db96a43d222f1b68a93208d43" Jan 23 13:55:01 crc kubenswrapper[4771]: I0123 13:55:01.047585 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.047550849 podStartE2EDuration="2.047550849s" podCreationTimestamp="2026-01-23 13:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:55:01.03273281 +0000 UTC m=+1342.055270445" watchObservedRunningTime="2026-01-23 13:55:01.047550849 +0000 UTC m=+1342.070088484" Jan 23 13:55:04 crc kubenswrapper[4771]: I0123 13:55:04.396926 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 13:55:05 crc kubenswrapper[4771]: I0123 13:55:05.331514 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 13:55:05 crc kubenswrapper[4771]: I0123 13:55:05.332059 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 13:55:05 crc kubenswrapper[4771]: I0123 13:55:05.361663 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:55:05 crc 
kubenswrapper[4771]: I0123 13:55:05.361778 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 13:55:06 crc kubenswrapper[4771]: I0123 13:55:06.350611 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="927c2f44-e120-4c3a-8871-83f28acd42bb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:55:06 crc kubenswrapper[4771]: I0123 13:55:06.350618 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="927c2f44-e120-4c3a-8871-83f28acd42bb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.231:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:55:06 crc kubenswrapper[4771]: I0123 13:55:06.376694 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29cef0f5-6afd-4a1f-af4a-df21e4e9336a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:55:06 crc kubenswrapper[4771]: I0123 13:55:06.376706 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="29cef0f5-6afd-4a1f-af4a-df21e4e9336a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.232:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:55:09 crc kubenswrapper[4771]: I0123 13:55:09.397330 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 13:55:09 crc kubenswrapper[4771]: I0123 13:55:09.465310 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 13:55:10 crc kubenswrapper[4771]: I0123 13:55:10.160286 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 13:55:11 crc kubenswrapper[4771]: I0123 13:55:11.948250 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.335850 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.339100 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.346061 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.376343 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.376758 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.377149 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.377190 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.393462 4771 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 13:55:15 crc kubenswrapper[4771]: I0123 13:55:15.397523 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 13:55:16 crc kubenswrapper[4771]: I0123 13:55:16.279226 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 13:55:25 crc kubenswrapper[4771]: I0123 13:55:25.994497 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:55:26 crc kubenswrapper[4771]: I0123 13:55:26.949166 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:55:29 crc kubenswrapper[4771]: I0123 13:55:29.785143 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="rabbitmq" containerID="cri-o://a36f014bb98d18e4806f92bca7540402079eeed1db8d1ce47ce3b311ac3a02e2" gracePeriod=604797 Jan 23 13:55:30 crc kubenswrapper[4771]: I0123 13:55:30.606272 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="rabbitmq" containerID="cri-o://53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55" gracePeriod=604797 Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.452309 4771 generic.go:334] "Generic (PLEG): container finished" podID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerID="a36f014bb98d18e4806f92bca7540402079eeed1db8d1ce47ce3b311ac3a02e2" exitCode=0 Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.453661 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c3f2be4-082b-4eb5-88d6-2b069d2dd361","Type":"ContainerDied","Data":"a36f014bb98d18e4806f92bca7540402079eeed1db8d1ce47ce3b311ac3a02e2"} Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.454343 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7c3f2be4-082b-4eb5-88d6-2b069d2dd361","Type":"ContainerDied","Data":"96ca02b65c8ab8aecdec02067a716d94d1061f300173eb07dcfc18edca46de3c"} Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.454437 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ca02b65c8ab8aecdec02067a716d94d1061f300173eb07dcfc18edca46de3c" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.456563 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.523898 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-plugins-conf\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524025 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-erlang-cookie-secret\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524084 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-plugins\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524122 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-server-conf\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524152 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx8qg\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-kube-api-access-gx8qg\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524202 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-tls\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524283 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-pod-info\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524437 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-erlang-cookie\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524476 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524558 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-confd\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: 
\"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.524599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-config-data\") pod \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\" (UID: \"7c3f2be4-082b-4eb5-88d6-2b069d2dd361\") " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.525553 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.525731 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.526249 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.536989 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-pod-info" (OuterVolumeSpecName: "pod-info") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.542042 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.567388 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.569394 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.571865 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-kube-api-access-gx8qg" (OuterVolumeSpecName: "kube-api-access-gx8qg") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "kube-api-access-gx8qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631365 4771 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631466 4771 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631480 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631490 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx8qg\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-kube-api-access-gx8qg\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631502 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631513 4771 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631524 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.631570 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.691076 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.728391 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-server-conf" (OuterVolumeSpecName: "server-conf") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.730611 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-config-data" (OuterVolumeSpecName: "config-data") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.733278 4771 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.733306 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.733316 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.786186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7c3f2be4-082b-4eb5-88d6-2b069d2dd361" (UID: "7c3f2be4-082b-4eb5-88d6-2b069d2dd361"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:55:31 crc kubenswrapper[4771]: I0123 13:55:31.835869 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7c3f2be4-082b-4eb5-88d6-2b069d2dd361-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.259313 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.345825 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-server-conf\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.345920 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-erlang-cookie\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346077 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-erlang-cookie-secret\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346206 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r47df\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-kube-api-access-r47df\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346280 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346314 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-plugins-conf\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346433 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-config-data\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346480 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-pod-info\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346509 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-plugins\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346551 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-tls\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: 
\"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.346619 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-confd\") pod \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\" (UID: \"205cfab6-722b-4d70-bdb7-3a12aaeea6e2\") " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.349554 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.353499 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.355953 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.357830 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.358701 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-kube-api-access-r47df" (OuterVolumeSpecName: "kube-api-access-r47df") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "kube-api-access-r47df". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.376862 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.384613 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-pod-info" (OuterVolumeSpecName: "pod-info") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.389122 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452221 4771 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452258 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452268 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452277 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452287 4771 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452297 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r47df\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-kube-api-access-r47df\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452319 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.452328 4771 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.511173 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-server-conf" (OuterVolumeSpecName: "server-conf") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.527852 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-config-data" (OuterVolumeSpecName: "config-data") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.542294 4771 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.551133 4771 generic.go:334] "Generic (PLEG): container finished" podID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerID="53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55" exitCode=0 Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.551258 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.553710 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.554573 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"205cfab6-722b-4d70-bdb7-3a12aaeea6e2","Type":"ContainerDied","Data":"53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55"} Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.554808 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"205cfab6-722b-4d70-bdb7-3a12aaeea6e2","Type":"ContainerDied","Data":"4a6fb67f7ffd345c55595251bf883000c533b801ed6a4b26f81be5bc4069b4dc"} Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.555008 4771 scope.go:117] "RemoveContainer" containerID="53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.555808 4771 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.555829 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.555845 4771 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.654636 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "205cfab6-722b-4d70-bdb7-3a12aaeea6e2" (UID: "205cfab6-722b-4d70-bdb7-3a12aaeea6e2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.661401 4771 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/205cfab6-722b-4d70-bdb7-3a12aaeea6e2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.669501 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.693950 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.702508 4771 scope.go:117] "RemoveContainer" containerID="25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.702717 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: E0123 13:55:32.708073 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="setup-container" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.708113 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="setup-container" Jan 23 13:55:32 crc kubenswrapper[4771]: E0123 13:55:32.708126 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="setup-container" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.708132 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="setup-container" Jan 23 13:55:32 crc kubenswrapper[4771]: E0123 13:55:32.708159 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="rabbitmq" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.708164 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="rabbitmq" Jan 23 13:55:32 crc kubenswrapper[4771]: E0123 13:55:32.708198 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="rabbitmq" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.708204 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="rabbitmq" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.710210 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" containerName="rabbitmq" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.710243 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" containerName="rabbitmq" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.750475 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.776957 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.786282 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.788752 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.789935 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.791476 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.791912 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.792462 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cpt4p" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.793778 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.800947 4771 scope.go:117] "RemoveContainer" containerID="53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55" Jan 23 13:55:32 crc kubenswrapper[4771]: E0123 13:55:32.811506 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55\": container with ID starting with 53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55 not found: ID does not exist" containerID="53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.811571 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55"} err="failed to get container status \"53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55\": rpc error: code = NotFound desc = could not find container \"53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55\": container with ID starting with 53aba07d6312a7fb6edb78e014647a81fa499a9905d19d954f3ae8b0d3a4ef55 not found: ID does not exist" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.811604 4771 scope.go:117] "RemoveContainer" containerID="25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20" Jan 23 13:55:32 crc kubenswrapper[4771]: E0123 13:55:32.815563 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20\": container with ID starting with 25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20 not found: ID does not exist" containerID="25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.815621 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20"} err="failed to get container status 
\"25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20\": rpc error: code = NotFound desc = could not find container \"25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20\": container with ID starting with 25ff27ed711433686363f26cc361e4419cf442b9351ced520515076f8ea47a20 not found: ID does not exist" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.891913 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12ed4577-dc9c-4535-b218-fe3580114a6f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.891996 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892028 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wwh\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-kube-api-access-j9wwh\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892074 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-config-data\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892089 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892134 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892210 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892317 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12ed4577-dc9c-4535-b218-fe3580114a6f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892364 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892503 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.892531 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.903819 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.919265 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.937178 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.941815 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.956272 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.956904 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.956970 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.957188 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.957267 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8zg7b" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.957340 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.957458 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.957998 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995275 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995348 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995436 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12ed4577-dc9c-4535-b218-fe3580114a6f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995460 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995519 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995560 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995663 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995697 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14b1f3d7-6878-46af-ae81-88676519f44b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995729 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995768 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/12ed4577-dc9c-4535-b218-fe3580114a6f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995844 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14b1f3d7-6878-46af-ae81-88676519f44b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995886 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995924 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wwh\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-kube-api-access-j9wwh\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.995991 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-config-data\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.996016 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.996019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.996040 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.996230 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.996316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0" 
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.996707 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.997113 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.997169 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtjm\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-kube-api-access-wrtjm\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.997187 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.997350 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.997385 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.997537 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12ed4577-dc9c-4535-b218-fe3580114a6f-config-data\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:32 crc kubenswrapper[4771]: I0123 13:55:32.999800 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.002181 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.003028 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12ed4577-dc9c-4535-b218-fe3580114a6f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.004214 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.005840 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12ed4577-dc9c-4535-b218-fe3580114a6f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.020704 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wwh\" (UniqueName: \"kubernetes.io/projected/12ed4577-dc9c-4535-b218-fe3580114a6f-kube-api-access-j9wwh\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.053236 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"12ed4577-dc9c-4535-b218-fe3580114a6f\") " pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099465 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099536 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099568 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrtjm\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-kube-api-access-wrtjm\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099590 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099610 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099645 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099689 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099722 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099742 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14b1f3d7-6878-46af-ae81-88676519f44b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099757 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.099795 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14b1f3d7-6878-46af-ae81-88676519f44b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.100852 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.101946 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.102537 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.102815 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.103289 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.104152 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/14b1f3d7-6878-46af-ae81-88676519f44b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.106462 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.106822 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/14b1f3d7-6878-46af-ae81-88676519f44b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.107318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.111814 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/14b1f3d7-6878-46af-ae81-88676519f44b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.122311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrtjm\" (UniqueName: \"kubernetes.io/projected/14b1f3d7-6878-46af-ae81-88676519f44b-kube-api-access-wrtjm\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.125078 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.143677 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"14b1f3d7-6878-46af-ae81-88676519f44b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.272171 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.275276 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205cfab6-722b-4d70-bdb7-3a12aaeea6e2" path="/var/lib/kubelet/pods/205cfab6-722b-4d70-bdb7-3a12aaeea6e2/volumes"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.276182 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3f2be4-082b-4eb5-88d6-2b069d2dd361" path="/var/lib/kubelet/pods/7c3f2be4-082b-4eb5-88d6-2b069d2dd361/volumes"
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.690522 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 13:55:33 crc kubenswrapper[4771]: W0123 13:55:33.693994 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12ed4577_dc9c_4535_b218_fe3580114a6f.slice/crio-baf9e3f3ebc2329edfaadcbc90fd84cc6611196304551c3080e36a5dca5f8a8a WatchSource:0}: Error finding container baf9e3f3ebc2329edfaadcbc90fd84cc6611196304551c3080e36a5dca5f8a8a: Status 404 returned error can't find the container with id baf9e3f3ebc2329edfaadcbc90fd84cc6611196304551c3080e36a5dca5f8a8a
Jan 23 13:55:33 crc kubenswrapper[4771]: W0123 13:55:33.817324 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14b1f3d7_6878_46af_ae81_88676519f44b.slice/crio-49ff447617a01a9849396ea16c5b5d8b04f2eebe3b67a62fff83894e3f417415 WatchSource:0}: Error finding container 49ff447617a01a9849396ea16c5b5d8b04f2eebe3b67a62fff83894e3f417415: Status 404 returned error can't find the container with id 49ff447617a01a9849396ea16c5b5d8b04f2eebe3b67a62fff83894e3f417415
Jan 23 13:55:33 crc kubenswrapper[4771]: I0123 13:55:33.831024 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 13:55:34 crc kubenswrapper[4771]: I0123 13:55:34.585926 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14b1f3d7-6878-46af-ae81-88676519f44b","Type":"ContainerStarted","Data":"49ff447617a01a9849396ea16c5b5d8b04f2eebe3b67a62fff83894e3f417415"}
Jan 23 13:55:34 crc kubenswrapper[4771]: I0123 13:55:34.587476 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12ed4577-dc9c-4535-b218-fe3580114a6f","Type":"ContainerStarted","Data":"baf9e3f3ebc2329edfaadcbc90fd84cc6611196304551c3080e36a5dca5f8a8a"}
Jan 23 13:55:36 crc kubenswrapper[4771]: I0123 13:55:36.618205 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12ed4577-dc9c-4535-b218-fe3580114a6f","Type":"ContainerStarted","Data":"90ffa78e8a69b9a8acd3d38d622280bc9c579f2b2416f196ca947df6564ebdd7"}
Jan 23 13:55:36 crc kubenswrapper[4771]: I0123 13:55:36.622138 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14b1f3d7-6878-46af-ae81-88676519f44b","Type":"ContainerStarted","Data":"6182b90afb1d4a2e23ecf0eeb29c1ad8819fd3bb9f0e1cd11038f98963303fd9"}
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.028170 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c7c98bb5f-htxbt"]
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.031466 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.037681 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.060285 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c7c98bb5f-htxbt"]
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.175378 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.175499 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.175523 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-config\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.175561 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.175609 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.175640 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-svc\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.175659 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nhdj\" (UniqueName: \"kubernetes.io/projected/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-kube-api-access-2nhdj\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.278033 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.278104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-svc\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.278129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nhdj\" (UniqueName: \"kubernetes.io/projected/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-kube-api-access-2nhdj\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.278193 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.278298 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.278323 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-config\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.278362 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.279311 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.279357 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-svc\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.279477 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.279803 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-config\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.279879 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.280236 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.301757 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nhdj\" (UniqueName: \"kubernetes.io/projected/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-kube-api-access-2nhdj\") pod \"dnsmasq-dns-6c7c98bb5f-htxbt\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.354378 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:40 crc kubenswrapper[4771]: I0123 13:55:40.838988 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c7c98bb5f-htxbt"]
Jan 23 13:55:41 crc kubenswrapper[4771]: I0123 13:55:41.689939 4771 generic.go:334] "Generic (PLEG): container finished" podID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" containerID="9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb" exitCode=0
Jan 23 13:55:41 crc kubenswrapper[4771]: I0123 13:55:41.690203 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" event={"ID":"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea","Type":"ContainerDied","Data":"9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb"}
Jan 23 13:55:41 crc kubenswrapper[4771]: I0123 13:55:41.690432 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" event={"ID":"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea","Type":"ContainerStarted","Data":"a3f76ed2b2a7af37ebe859447b7c14ecbfdf69cbb90ff1a2d786b86d13d9e399"}
Jan 23 13:55:42 crc kubenswrapper[4771]: I0123 13:55:42.704126 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" event={"ID":"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea","Type":"ContainerStarted","Data":"ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3"}
Jan 23 13:55:42 crc kubenswrapper[4771]: I0123 13:55:42.704612 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:42 crc kubenswrapper[4771]: I0123 13:55:42.725433 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" podStartSLOduration=3.725386849 podStartE2EDuration="3.725386849s" podCreationTimestamp="2026-01-23 13:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:55:42.722655092 +0000 UTC m=+1383.745192747" watchObservedRunningTime="2026-01-23 13:55:42.725386849 +0000 UTC m=+1383.747924474"
Jan 23 13:55:46 crc kubenswrapper[4771]: I0123 13:55:46.710525 4771 scope.go:117] "RemoveContainer" containerID="0950f7746297950384c56a9218038f2a7ee2b7618033713147fbb195a90e8ec3"
Jan 23 13:55:46 crc kubenswrapper[4771]: I0123 13:55:46.739223 4771 scope.go:117] "RemoveContainer" containerID="7d99e1c86e9e4ecf377470ad86e9ca700b7f9077628c6f89ad3b8868b41a0180"
Jan 23 13:55:46 crc kubenswrapper[4771]: I0123 13:55:46.771720 4771 scope.go:117] "RemoveContainer" containerID="0689993ce178a84e157f2f38d9e360cdf7ecc3f6236d6ce2baf7b3ac37e3e1e7"
Jan 23 13:55:46 crc kubenswrapper[4771]: I0123 13:55:46.846404 4771 scope.go:117] "RemoveContainer" containerID="79dcf3758397b1bbf972a6508c4dedfdd003c8dcb2dc410748fa1f1fe07f6d9b"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.356974 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.443260 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59687d4f97-zfhbp"]
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.443712 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" podUID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerName="dnsmasq-dns" containerID="cri-o://9354f255723e7b6668d4be41c64db50c3730ebd337c26f00cbd785ae8fe0c958" gracePeriod=10
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.613964 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67cdb8b545-cwd2l"]
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.619599 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.629074 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cdb8b545-cwd2l"]
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.751985 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-config\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.752597 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5n5q\" (UniqueName: \"kubernetes.io/projected/47bf4b2c-e16d-47d8-b088-4cba3cf18643-kube-api-access-z5n5q\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.752634 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-ovsdbserver-nb\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.752678 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-ovsdbserver-sb\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.752801 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-dns-svc\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.752863 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.753011 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-dns-swift-storage-0\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l"
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.828077 4771 generic.go:334] "Generic (PLEG): container finished" podID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerID="9354f255723e7b6668d4be41c64db50c3730ebd337c26f00cbd785ae8fe0c958" exitCode=0
Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.828129 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp"
event={"ID":"a3bf5d7b-8d58-47f7-a92b-54ca738d3032","Type":"ContainerDied","Data":"9354f255723e7b6668d4be41c64db50c3730ebd337c26f00cbd785ae8fe0c958"} Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.855193 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-dns-swift-storage-0\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.855334 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-config\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.855394 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5n5q\" (UniqueName: \"kubernetes.io/projected/47bf4b2c-e16d-47d8-b088-4cba3cf18643-kube-api-access-z5n5q\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.855493 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-ovsdbserver-nb\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.855513 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-ovsdbserver-sb\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.855553 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-dns-svc\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.855587 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.856479 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.856715 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-dns-swift-storage-0\") pod 
\"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.857109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-config\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.857486 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-ovsdbserver-nb\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.857719 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-ovsdbserver-sb\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.858237 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47bf4b2c-e16d-47d8-b088-4cba3cf18643-dns-svc\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.901077 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5n5q\" (UniqueName: \"kubernetes.io/projected/47bf4b2c-e16d-47d8-b088-4cba3cf18643-kube-api-access-z5n5q\") pod \"dnsmasq-dns-67cdb8b545-cwd2l\" (UID: \"47bf4b2c-e16d-47d8-b088-4cba3cf18643\") " pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:50 crc kubenswrapper[4771]: I0123 13:55:50.948282 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.111610 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.265510 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-svc\") pod \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.265656 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-config\") pod \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.265858 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s89xw\" (UniqueName: \"kubernetes.io/projected/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-kube-api-access-s89xw\") pod \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.265955 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-swift-storage-0\") pod \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.266032 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-nb\") pod \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.266062 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-sb\") pod \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\" (UID: \"a3bf5d7b-8d58-47f7-a92b-54ca738d3032\") " Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.287669 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-kube-api-access-s89xw" (OuterVolumeSpecName: "kube-api-access-s89xw") pod "a3bf5d7b-8d58-47f7-a92b-54ca738d3032" (UID: "a3bf5d7b-8d58-47f7-a92b-54ca738d3032"). InnerVolumeSpecName "kube-api-access-s89xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.353210 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3bf5d7b-8d58-47f7-a92b-54ca738d3032" (UID: "a3bf5d7b-8d58-47f7-a92b-54ca738d3032"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.365553 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-config" (OuterVolumeSpecName: "config") pod "a3bf5d7b-8d58-47f7-a92b-54ca738d3032" (UID: "a3bf5d7b-8d58-47f7-a92b-54ca738d3032"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.369314 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.369364 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s89xw\" (UniqueName: \"kubernetes.io/projected/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-kube-api-access-s89xw\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.369403 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.385608 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3bf5d7b-8d58-47f7-a92b-54ca738d3032" (UID: "a3bf5d7b-8d58-47f7-a92b-54ca738d3032"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.407823 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3bf5d7b-8d58-47f7-a92b-54ca738d3032" (UID: "a3bf5d7b-8d58-47f7-a92b-54ca738d3032"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.420780 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a3bf5d7b-8d58-47f7-a92b-54ca738d3032" (UID: "a3bf5d7b-8d58-47f7-a92b-54ca738d3032"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.472099 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.472135 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.472148 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3bf5d7b-8d58-47f7-a92b-54ca738d3032-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.629015 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cdb8b545-cwd2l"] Jan 23 13:55:51 crc kubenswrapper[4771]: W0123 13:55:51.634529 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47bf4b2c_e16d_47d8_b088_4cba3cf18643.slice/crio-404bf4a968745af23e13909a7330a6aafd70c9de799822ccc3ba5632c9f75413 WatchSource:0}: Error finding container 404bf4a968745af23e13909a7330a6aafd70c9de799822ccc3ba5632c9f75413: Status 404 returned error can't find the container with id 404bf4a968745af23e13909a7330a6aafd70c9de799822ccc3ba5632c9f75413 Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.842317 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" event={"ID":"a3bf5d7b-8d58-47f7-a92b-54ca738d3032","Type":"ContainerDied","Data":"9296aac67a487254c35473d209f7f53b087983d131f6295adb5f6d0b2a4a4515"} Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.842359 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59687d4f97-zfhbp" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.842400 4771 scope.go:117] "RemoveContainer" containerID="9354f255723e7b6668d4be41c64db50c3730ebd337c26f00cbd785ae8fe0c958" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.844758 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" event={"ID":"47bf4b2c-e16d-47d8-b088-4cba3cf18643","Type":"ContainerStarted","Data":"404bf4a968745af23e13909a7330a6aafd70c9de799822ccc3ba5632c9f75413"} Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.907021 4771 scope.go:117] "RemoveContainer" containerID="38398ea628df335bf2c31be9f1ba9dc7f49d469d165eed46a46f2a31e451592f" Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.915124 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59687d4f97-zfhbp"] Jan 23 13:55:51 crc kubenswrapper[4771]: I0123 13:55:51.931129 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59687d4f97-zfhbp"] Jan 23 13:55:52 crc kubenswrapper[4771]: I0123 13:55:52.857639 4771 generic.go:334] "Generic (PLEG): container finished" podID="47bf4b2c-e16d-47d8-b088-4cba3cf18643" containerID="549a28a889bde87b7fbd894feeef20fc94a9d12e1919c34d46121767893425c7" exitCode=0 Jan 23 13:55:52 crc kubenswrapper[4771]: I0123 13:55:52.857723 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" event={"ID":"47bf4b2c-e16d-47d8-b088-4cba3cf18643","Type":"ContainerDied","Data":"549a28a889bde87b7fbd894feeef20fc94a9d12e1919c34d46121767893425c7"} Jan 23 13:55:53 crc kubenswrapper[4771]: I0123 13:55:53.240451 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" path="/var/lib/kubelet/pods/a3bf5d7b-8d58-47f7-a92b-54ca738d3032/volumes" Jan 23 13:55:53 crc kubenswrapper[4771]: I0123 13:55:53.875780 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" event={"ID":"47bf4b2c-e16d-47d8-b088-4cba3cf18643","Type":"ContainerStarted","Data":"ba3c904669555eafb47e4e3c358744c03b8cf66cc22cc4cd37736297cf4561db"} Jan 23 13:55:53 crc kubenswrapper[4771]: I0123 13:55:53.877255 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:55:53 crc kubenswrapper[4771]: I0123 13:55:53.911883 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" podStartSLOduration=3.911849721 podStartE2EDuration="3.911849721s" podCreationTimestamp="2026-01-23 13:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:55:53.900898563 +0000 UTC m=+1394.923436198" watchObservedRunningTime="2026-01-23 13:55:53.911849721 +0000 UTC m=+1394.934387346" Jan 23 13:56:00 crc kubenswrapper[4771]: I0123 13:56:00.949779 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67cdb8b545-cwd2l" Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.031623 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c7c98bb5f-htxbt"] Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.031976 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" podUID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" containerName="dnsmasq-dns" 
containerID="cri-o://ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3" gracePeriod=10 Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.810107 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.954343 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-config\") pod \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.954430 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-sb\") pod \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.954664 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-svc\") pod \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.954878 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-openstack-edpm-ipam\") pod \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.954940 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nhdj\" (UniqueName: \"kubernetes.io/projected/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-kube-api-access-2nhdj\") pod \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.954978 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-nb\") pod \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.955010 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-swift-storage-0\") pod \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\" (UID: \"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea\") " Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.961977 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-kube-api-access-2nhdj" (OuterVolumeSpecName: "kube-api-access-2nhdj") pod "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" (UID: "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea"). InnerVolumeSpecName "kube-api-access-2nhdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.991361 4771 generic.go:334] "Generic (PLEG): container finished" podID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" containerID="ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3" exitCode=0 Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.991474 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" event={"ID":"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea","Type":"ContainerDied","Data":"ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3"} Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.991501 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.991521 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7c98bb5f-htxbt" event={"ID":"5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea","Type":"ContainerDied","Data":"a3f76ed2b2a7af37ebe859447b7c14ecbfdf69cbb90ff1a2d786b86d13d9e399"} Jan 23 13:56:01 crc kubenswrapper[4771]: I0123 13:56:01.991550 4771 scope.go:117] "RemoveContainer" containerID="ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.022737 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-config" (OuterVolumeSpecName: "config") pod "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" (UID: "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.026945 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" (UID: "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.030649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" (UID: "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.040921 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" (UID: "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.048507 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" (UID: "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.059715 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.059759 4771 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.059777 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.059792 4771 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.059803 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.059816 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nhdj\" (UniqueName: \"kubernetes.io/projected/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-kube-api-access-2nhdj\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.084087 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" (UID: "5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.162285 4771 scope.go:117] "RemoveContainer" containerID="9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.162479 4771 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.220123 4771 scope.go:117] "RemoveContainer" containerID="ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3" Jan 23 13:56:02 crc kubenswrapper[4771]: E0123 13:56:02.221188 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3\": container with ID starting with ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3 not found: ID does not exist" containerID="ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.221242 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3"} err="failed to get container status \"ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3\": rpc error: code = NotFound desc = could not find container \"ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3\": container with ID starting with ddb5afcf0669030df7c08a5239b8222fdebabcbf213f55ae723de04a7baceda3 not found: ID does not exist" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.221281 4771 scope.go:117] "RemoveContainer" containerID="9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb" Jan 23 13:56:02 crc kubenswrapper[4771]: E0123 13:56:02.221840 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb\": container with ID starting with 9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb not found: ID does not exist" containerID="9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.221869 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb"} err="failed to get container status \"9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb\": rpc error: code = NotFound desc = could not find container \"9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb\": container with ID starting with 9e2e3cd0ff7c864fc4de173eebaf602ccc1c573e47575460b241dccf4cbd3fcb not found: ID does not exist" Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.340911 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c7c98bb5f-htxbt"] Jan 23 13:56:02 crc kubenswrapper[4771]: I0123 13:56:02.354058 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c7c98bb5f-htxbt"] Jan 23 13:56:03 crc kubenswrapper[4771]: I0123 13:56:03.243172 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" path="/var/lib/kubelet/pods/5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea/volumes" Jan 23 
13:56:08 crc kubenswrapper[4771]: I0123 13:56:08.056189 4771 generic.go:334] "Generic (PLEG): container finished" podID="12ed4577-dc9c-4535-b218-fe3580114a6f" containerID="90ffa78e8a69b9a8acd3d38d622280bc9c579f2b2416f196ca947df6564ebdd7" exitCode=0 Jan 23 13:56:08 crc kubenswrapper[4771]: I0123 13:56:08.056290 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12ed4577-dc9c-4535-b218-fe3580114a6f","Type":"ContainerDied","Data":"90ffa78e8a69b9a8acd3d38d622280bc9c579f2b2416f196ca947df6564ebdd7"} Jan 23 13:56:08 crc kubenswrapper[4771]: I0123 13:56:08.059591 4771 generic.go:334] "Generic (PLEG): container finished" podID="14b1f3d7-6878-46af-ae81-88676519f44b" containerID="6182b90afb1d4a2e23ecf0eeb29c1ad8819fd3bb9f0e1cd11038f98963303fd9" exitCode=0 Jan 23 13:56:08 crc kubenswrapper[4771]: I0123 13:56:08.059651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14b1f3d7-6878-46af-ae81-88676519f44b","Type":"ContainerDied","Data":"6182b90afb1d4a2e23ecf0eeb29c1ad8819fd3bb9f0e1cd11038f98963303fd9"} Jan 23 13:56:09 crc kubenswrapper[4771]: I0123 13:56:09.075088 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"14b1f3d7-6878-46af-ae81-88676519f44b","Type":"ContainerStarted","Data":"00bf1751ac28f3acc2a692572de9e46852d288475727363e5e71d0be0158a167"} Jan 23 13:56:09 crc kubenswrapper[4771]: I0123 13:56:09.076169 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:56:09 crc kubenswrapper[4771]: I0123 13:56:09.079455 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12ed4577-dc9c-4535-b218-fe3580114a6f","Type":"ContainerStarted","Data":"cdcfbf5c3bd0e4276b66fdd02e5216621957b4797cec605bd693ce7e00519301"} Jan 23 13:56:09 crc kubenswrapper[4771]: I0123 13:56:09.079730 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 13:56:09 crc kubenswrapper[4771]: I0123 13:56:09.131440 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.131391626 podStartE2EDuration="37.131391626s" podCreationTimestamp="2026-01-23 13:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:56:09.116090499 +0000 UTC m=+1410.138628124" watchObservedRunningTime="2026-01-23 13:56:09.131391626 +0000 UTC m=+1410.153929251" Jan 23 13:56:09 crc kubenswrapper[4771]: I0123 13:56:09.161639 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.161613616 podStartE2EDuration="37.161613616s" podCreationTimestamp="2026-01-23 13:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:56:09.14223558 +0000 UTC m=+1410.164773215" watchObservedRunningTime="2026-01-23 13:56:09.161613616 +0000 UTC m=+1410.184151241" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.601063 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f"] Jan 23 13:56:14 crc kubenswrapper[4771]: E0123 13:56:14.602337 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" 
containerName="dnsmasq-dns" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.602356 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" containerName="dnsmasq-dns" Jan 23 13:56:14 crc kubenswrapper[4771]: E0123 13:56:14.602380 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerName="init" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.602388 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerName="init" Jan 23 13:56:14 crc kubenswrapper[4771]: E0123 13:56:14.602429 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerName="dnsmasq-dns" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.602439 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerName="dnsmasq-dns" Jan 23 13:56:14 crc kubenswrapper[4771]: E0123 13:56:14.602461 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" containerName="init" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.602468 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" containerName="init" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.602736 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8dd702-301b-4ef5-bf8f-1b7f3e8626ea" containerName="dnsmasq-dns" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.602757 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3bf5d7b-8d58-47f7-a92b-54ca738d3032" containerName="dnsmasq-dns" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.603852 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.609737 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.610036 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.610198 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.611996 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.632092 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f"] Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.757769 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.757842 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.757992 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.758017 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9c89\" (UniqueName: \"kubernetes.io/projected/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-kube-api-access-p9c89\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.860507 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.860923 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9c89\" (UniqueName: 
\"kubernetes.io/projected/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-kube-api-access-p9c89\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.861190 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.861435 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.869052 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.869117 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.870535 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.884439 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9c89\" (UniqueName: \"kubernetes.io/projected/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-kube-api-access-p9c89\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:14 crc kubenswrapper[4771]: I0123 13:56:14.927086 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:15 crc kubenswrapper[4771]: I0123 13:56:15.605303 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f"] Jan 23 13:56:16 crc kubenswrapper[4771]: I0123 13:56:16.166203 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" event={"ID":"79fe5e81-6503-49e2-ae4d-35cc605ac5ae","Type":"ContainerStarted","Data":"6f2d325fb62938513caa59dbecb18424e5a85835cb5ade11da56dbc2265a26c3"} Jan 23 13:56:23 crc kubenswrapper[4771]: I0123 13:56:23.128158 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="12ed4577-dc9c-4535-b218-fe3580114a6f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.234:5671: connect: connection refused" Jan 23 13:56:23 crc kubenswrapper[4771]: I0123 13:56:23.276187 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="14b1f3d7-6878-46af-ae81-88676519f44b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.235:5671: connect: connection refused" Jan 23 13:56:26 crc kubenswrapper[4771]: I0123 13:56:26.305444 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" event={"ID":"79fe5e81-6503-49e2-ae4d-35cc605ac5ae","Type":"ContainerStarted","Data":"d323083a18f784ee474ceaca364fd36a87fdbd8e91eed7c26d6a7c2cab0f6f5a"} Jan 23 13:56:26 crc kubenswrapper[4771]: I0123 13:56:26.350053 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" podStartSLOduration=2.258488661 podStartE2EDuration="12.350023012s" podCreationTimestamp="2026-01-23 13:56:14 +0000 UTC" firstStartedPulling="2026-01-23 13:56:15.621133259 +0000 UTC m=+1416.643670884" lastFinishedPulling="2026-01-23 13:56:25.71266762 +0000 UTC m=+1426.735205235" observedRunningTime="2026-01-23 13:56:26.324219072 +0000 UTC m=+1427.346756717" watchObservedRunningTime="2026-01-23 13:56:26.350023012 +0000 UTC m=+1427.372560647" Jan 23 13:56:33 crc kubenswrapper[4771]: I0123 13:56:33.127691 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 13:56:33 crc kubenswrapper[4771]: I0123 13:56:33.276701 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 13:56:37 crc kubenswrapper[4771]: I0123 13:56:37.441974 4771 generic.go:334] "Generic (PLEG): container finished" podID="79fe5e81-6503-49e2-ae4d-35cc605ac5ae" containerID="d323083a18f784ee474ceaca364fd36a87fdbd8e91eed7c26d6a7c2cab0f6f5a" exitCode=0 Jan 23 13:56:37 crc kubenswrapper[4771]: I0123 13:56:37.442112 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" event={"ID":"79fe5e81-6503-49e2-ae4d-35cc605ac5ae","Type":"ContainerDied","Data":"d323083a18f784ee474ceaca364fd36a87fdbd8e91eed7c26d6a7c2cab0f6f5a"} Jan 23 13:56:38 crc kubenswrapper[4771]: I0123 13:56:38.913065 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.049233 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9c89\" (UniqueName: \"kubernetes.io/projected/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-kube-api-access-p9c89\") pod \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.049325 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-ssh-key-openstack-edpm-ipam\") pod \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.050291 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-repo-setup-combined-ca-bundle\") pod \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.050470 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-inventory\") pod \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\" (UID: \"79fe5e81-6503-49e2-ae4d-35cc605ac5ae\") " Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.056922 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-kube-api-access-p9c89" (OuterVolumeSpecName: "kube-api-access-p9c89") pod "79fe5e81-6503-49e2-ae4d-35cc605ac5ae" (UID: "79fe5e81-6503-49e2-ae4d-35cc605ac5ae"). InnerVolumeSpecName "kube-api-access-p9c89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.063694 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "79fe5e81-6503-49e2-ae4d-35cc605ac5ae" (UID: "79fe5e81-6503-49e2-ae4d-35cc605ac5ae"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.094093 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-inventory" (OuterVolumeSpecName: "inventory") pod "79fe5e81-6503-49e2-ae4d-35cc605ac5ae" (UID: "79fe5e81-6503-49e2-ae4d-35cc605ac5ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.097630 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79fe5e81-6503-49e2-ae4d-35cc605ac5ae" (UID: "79fe5e81-6503-49e2-ae4d-35cc605ac5ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.154226 4771 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.154283 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.154299 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9c89\" (UniqueName: \"kubernetes.io/projected/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-kube-api-access-p9c89\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.154313 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79fe5e81-6503-49e2-ae4d-35cc605ac5ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.469254 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" event={"ID":"79fe5e81-6503-49e2-ae4d-35cc605ac5ae","Type":"ContainerDied","Data":"6f2d325fb62938513caa59dbecb18424e5a85835cb5ade11da56dbc2265a26c3"} Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.469713 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2d325fb62938513caa59dbecb18424e5a85835cb5ade11da56dbc2265a26c3" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.469346 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.585097 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm"] Jan 23 13:56:39 crc kubenswrapper[4771]: E0123 13:56:39.585721 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79fe5e81-6503-49e2-ae4d-35cc605ac5ae" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.585743 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="79fe5e81-6503-49e2-ae4d-35cc605ac5ae" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.586024 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="79fe5e81-6503-49e2-ae4d-35cc605ac5ae" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.586917 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.592768 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.592781 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.592889 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.592997 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.606227 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm"] Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.767219 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr67x\" (UniqueName: \"kubernetes.io/projected/b1961816-bdc5-454b-a6e6-a21748cf812f-kube-api-access-wr67x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.767373 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.768214 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.871076 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.871158 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr67x\" (UniqueName: \"kubernetes.io/projected/b1961816-bdc5-454b-a6e6-a21748cf812f-kube-api-access-wr67x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.871238 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.877839 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.878062 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.891256 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr67x\" (UniqueName: \"kubernetes.io/projected/b1961816-bdc5-454b-a6e6-a21748cf812f-kube-api-access-wr67x\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hnztm\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:39 crc kubenswrapper[4771]: I0123 13:56:39.916758 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:40 crc kubenswrapper[4771]: W0123 13:56:40.481990 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1961816_bdc5_454b_a6e6_a21748cf812f.slice/crio-dfec511dc6bf48adcabe9c2c6a3c6c6665828ee47f0e6e4c3351916e64b9945b WatchSource:0}: Error finding container dfec511dc6bf48adcabe9c2c6a3c6c6665828ee47f0e6e4c3351916e64b9945b: Status 404 returned error can't find the container with id dfec511dc6bf48adcabe9c2c6a3c6c6665828ee47f0e6e4c3351916e64b9945b Jan 23 13:56:40 crc kubenswrapper[4771]: I0123 13:56:40.488491 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm"] Jan 23 13:56:41 crc kubenswrapper[4771]: I0123 13:56:41.532788 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" event={"ID":"b1961816-bdc5-454b-a6e6-a21748cf812f","Type":"ContainerStarted","Data":"ec9eee5bfea15719cb69e4aeea717b34581114a8926b595f8cba1c3a02841f96"} Jan 23 13:56:41 crc kubenswrapper[4771]: I0123 13:56:41.533310 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" event={"ID":"b1961816-bdc5-454b-a6e6-a21748cf812f","Type":"ContainerStarted","Data":"dfec511dc6bf48adcabe9c2c6a3c6c6665828ee47f0e6e4c3351916e64b9945b"} Jan 23 13:56:41 crc kubenswrapper[4771]: I0123 13:56:41.561801 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" podStartSLOduration=2.064278101 podStartE2EDuration="2.561774679s" podCreationTimestamp="2026-01-23 13:56:39 +0000 UTC" firstStartedPulling="2026-01-23 13:56:40.484388095 +0000 UTC m=+1441.506925720" lastFinishedPulling="2026-01-23 13:56:40.981884663 +0000 UTC m=+1442.004422298" 
observedRunningTime="2026-01-23 13:56:41.557084539 +0000 UTC m=+1442.579622174" watchObservedRunningTime="2026-01-23 13:56:41.561774679 +0000 UTC m=+1442.584312314" Jan 23 13:56:44 crc kubenswrapper[4771]: I0123 13:56:44.568121 4771 generic.go:334] "Generic (PLEG): container finished" podID="b1961816-bdc5-454b-a6e6-a21748cf812f" containerID="ec9eee5bfea15719cb69e4aeea717b34581114a8926b595f8cba1c3a02841f96" exitCode=0 Jan 23 13:56:44 crc kubenswrapper[4771]: I0123 13:56:44.568210 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" event={"ID":"b1961816-bdc5-454b-a6e6-a21748cf812f","Type":"ContainerDied","Data":"ec9eee5bfea15719cb69e4aeea717b34581114a8926b595f8cba1c3a02841f96"} Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.043784 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.245008 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-ssh-key-openstack-edpm-ipam\") pod \"b1961816-bdc5-454b-a6e6-a21748cf812f\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.245123 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr67x\" (UniqueName: \"kubernetes.io/projected/b1961816-bdc5-454b-a6e6-a21748cf812f-kube-api-access-wr67x\") pod \"b1961816-bdc5-454b-a6e6-a21748cf812f\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.245378 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-inventory\") pod \"b1961816-bdc5-454b-a6e6-a21748cf812f\" (UID: \"b1961816-bdc5-454b-a6e6-a21748cf812f\") " Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.262816 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1961816-bdc5-454b-a6e6-a21748cf812f-kube-api-access-wr67x" (OuterVolumeSpecName: "kube-api-access-wr67x") pod "b1961816-bdc5-454b-a6e6-a21748cf812f" (UID: "b1961816-bdc5-454b-a6e6-a21748cf812f"). InnerVolumeSpecName "kube-api-access-wr67x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.278100 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b1961816-bdc5-454b-a6e6-a21748cf812f" (UID: "b1961816-bdc5-454b-a6e6-a21748cf812f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.287360 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-inventory" (OuterVolumeSpecName: "inventory") pod "b1961816-bdc5-454b-a6e6-a21748cf812f" (UID: "b1961816-bdc5-454b-a6e6-a21748cf812f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.348859 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.348919 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1961816-bdc5-454b-a6e6-a21748cf812f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.348934 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr67x\" (UniqueName: \"kubernetes.io/projected/b1961816-bdc5-454b-a6e6-a21748cf812f-kube-api-access-wr67x\") on node \"crc\" DevicePath \"\"" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.608525 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" event={"ID":"b1961816-bdc5-454b-a6e6-a21748cf812f","Type":"ContainerDied","Data":"dfec511dc6bf48adcabe9c2c6a3c6c6665828ee47f0e6e4c3351916e64b9945b"} Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.608598 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfec511dc6bf48adcabe9c2c6a3c6c6665828ee47f0e6e4c3351916e64b9945b" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.608731 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hnztm" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.689651 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4"] Jan 23 13:56:46 crc kubenswrapper[4771]: E0123 13:56:46.690221 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1961816-bdc5-454b-a6e6-a21748cf812f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.690246 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1961816-bdc5-454b-a6e6-a21748cf812f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.690521 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1961816-bdc5-454b-a6e6-a21748cf812f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.691380 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.694307 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.694307 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.695601 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.695837 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.711153 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4"] Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.759526 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zpl\" (UniqueName: \"kubernetes.io/projected/30a335c9-357c-4ea4-8737-d8d795f1a05d-kube-api-access-z7zpl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.759663 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.759950 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.760201 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.862296 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.862480 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.862594 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7zpl\" (UniqueName: \"kubernetes.io/projected/30a335c9-357c-4ea4-8737-d8d795f1a05d-kube-api-access-z7zpl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.862996 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.869070 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.869969 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.871847 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:46 crc kubenswrapper[4771]: I0123 13:56:46.883820 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7zpl\" (UniqueName: \"kubernetes.io/projected/30a335c9-357c-4ea4-8737-d8d795f1a05d-kube-api-access-z7zpl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:47 crc kubenswrapper[4771]: I0123 13:56:47.012641 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 13:56:47 crc kubenswrapper[4771]: I0123 13:56:47.029935 4771 scope.go:117] "RemoveContainer" containerID="f130420f5f38dd7ff3309ed30779e51ac68ec4c51ff6ab2a8311f32b27afd9d1" Jan 23 13:56:47 crc kubenswrapper[4771]: I0123 13:56:47.093556 4771 scope.go:117] "RemoveContainer" containerID="88e8d3ab56095979828ea7c6448d978a8721a8d3bdd07f7f37ffb05bfae28a8e" Jan 23 13:56:47 crc kubenswrapper[4771]: I0123 13:56:47.187172 4771 scope.go:117] "RemoveContainer" containerID="a36f014bb98d18e4806f92bca7540402079eeed1db8d1ce47ce3b311ac3a02e2" Jan 23 13:56:47 crc kubenswrapper[4771]: I0123 13:56:47.229507 4771 scope.go:117] "RemoveContainer" containerID="162e65c46d4a60b1e4293bdde905464abe77cc1e8bbef0b366bb479371a11a98" Jan 23 13:56:47 crc kubenswrapper[4771]: W0123 13:56:47.677951 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30a335c9_357c_4ea4_8737_d8d795f1a05d.slice/crio-af5f2584bc4175c06900263d8c910f3095f0ac9e73490cf2b074640c9b1b4c74 WatchSource:0}: Error finding container af5f2584bc4175c06900263d8c910f3095f0ac9e73490cf2b074640c9b1b4c74: Status 404 returned error can't find the container with id af5f2584bc4175c06900263d8c910f3095f0ac9e73490cf2b074640c9b1b4c74 Jan 23 13:56:47 crc kubenswrapper[4771]: I0123 13:56:47.679521 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4"] Jan 23 13:56:48 crc kubenswrapper[4771]: I0123 13:56:48.644111 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" event={"ID":"30a335c9-357c-4ea4-8737-d8d795f1a05d","Type":"ContainerStarted","Data":"204ada120862966a73f158cc82b24d0aa294b29c7de06d90d1f4ce4b7d1f6acf"} Jan 23 13:56:48 crc kubenswrapper[4771]: I0123 13:56:48.644810 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" event={"ID":"30a335c9-357c-4ea4-8737-d8d795f1a05d","Type":"ContainerStarted","Data":"af5f2584bc4175c06900263d8c910f3095f0ac9e73490cf2b074640c9b1b4c74"} Jan 23 13:56:48 crc kubenswrapper[4771]: I0123 13:56:48.668228 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" podStartSLOduration=2.155198206 podStartE2EDuration="2.668206517s" podCreationTimestamp="2026-01-23 13:56:46 +0000 UTC" firstStartedPulling="2026-01-23 13:56:47.68198285 +0000 UTC m=+1448.704520465" lastFinishedPulling="2026-01-23 13:56:48.194991151 +0000 UTC m=+1449.217528776" observedRunningTime="2026-01-23 13:56:48.664983305 +0000 UTC m=+1449.687520970" watchObservedRunningTime="2026-01-23 13:56:48.668206517 +0000 UTC m=+1449.690744142" Jan 23 13:57:00 crc kubenswrapper[4771]: I0123 13:57:00.312363 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:57:00 crc kubenswrapper[4771]: I0123 13:57:00.313207 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:57:30 crc kubenswrapper[4771]: I0123 13:57:30.312556 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 13:57:30 crc kubenswrapper[4771]: I0123 13:57:30.313764 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:57:47 crc kubenswrapper[4771]: I0123 13:57:47.449085 4771 scope.go:117] "RemoveContainer" containerID="467dd22754639004b711c0bbe3901271b9bf2feadb428d6c5284bc6dfaefc164"
Jan 23 13:57:47 crc kubenswrapper[4771]: I0123 13:57:47.505922 4771 scope.go:117] "RemoveContainer" containerID="cabd9998636e96e452e80ad698f9613b41fbb3d501f4173099d358be6835a6f8"
Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.312229 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.313063 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.313126 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.314271 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.314345 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" gracePeriod=600
Jan 23 13:58:00 crc kubenswrapper[4771]: E0123 13:58:00.442059 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.540450 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" exitCode=0
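
This is the kubelet's standard liveness-restart path: repeated probe failures (here at 13:57:00, 13:57:30 and 13:58:00, consistent with periodSeconds=30 and the default failureThreshold of 3) flip the container to unhealthy, it is killed with its 600s grace period, and the replacement is throttled by CrashLoopBackOff. The kubelet's restart delay doubles from 10s up to a 5m cap, which is why every retry below reports "back-off 5m0s" rather than a growing value; a sketch of that schedule:

    # CrashLoopBackOff delay schedule, as a sketch: the kubelet starts at 10s,
    # doubles per crash, and caps the delay at 5 minutes (300s).
    delay, cap = 10, 300
    schedule = []
    while delay < cap:
        schedule.append(delay)
        delay *= 2
    schedule.append(cap)
    print(schedule)  # [10, 20, 40, 80, 160, 300]
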
podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" exitCode=0 Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.540500 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40"} Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.540549 4771 scope.go:117] "RemoveContainer" containerID="57cfa0bafaf927f754bb5bd9dae0b9c910ada95388993f47d6c2b51a3916a54d" Jan 23 13:58:00 crc kubenswrapper[4771]: I0123 13:58:00.541622 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:58:00 crc kubenswrapper[4771]: E0123 13:58:00.542106 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:58:15 crc kubenswrapper[4771]: I0123 13:58:15.228210 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:58:15 crc kubenswrapper[4771]: E0123 13:58:15.229004 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:58:27 crc kubenswrapper[4771]: I0123 13:58:27.228691 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:58:27 crc kubenswrapper[4771]: E0123 13:58:27.229598 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:58:42 crc kubenswrapper[4771]: I0123 13:58:42.228811 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:58:42 crc kubenswrapper[4771]: E0123 13:58:42.229641 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:58:47 crc kubenswrapper[4771]: I0123 13:58:47.604739 4771 scope.go:117] "RemoveContainer" containerID="c865b2bf5cc942e6c3f26dd2c3f3ff7a02e5a9cf8b2e8cef84a88d32613d3754" Jan 23 13:58:47 crc 
kubenswrapper[4771]: I0123 13:58:47.661295 4771 scope.go:117] "RemoveContainer" containerID="d75a57f96158eeaae9343d51d46dc7130b9ff3961c348006eb0ca56947114878" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.089238 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ddgx"] Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.093739 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.113940 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ddgx"] Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.222639 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9ks\" (UniqueName: \"kubernetes.io/projected/07935179-bca0-404d-9d21-4e474fbf7a29-kube-api-access-9r9ks\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.222755 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-utilities\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.222840 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-catalog-content\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.325668 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-catalog-content\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.325896 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r9ks\" (UniqueName: \"kubernetes.io/projected/07935179-bca0-404d-9d21-4e474fbf7a29-kube-api-access-9r9ks\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.325966 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-utilities\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.326294 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-catalog-content\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.326540 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-utilities\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.351076 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r9ks\" (UniqueName: \"kubernetes.io/projected/07935179-bca0-404d-9d21-4e474fbf7a29-kube-api-access-9r9ks\") pod \"redhat-operators-9ddgx\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.432179 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:49 crc kubenswrapper[4771]: I0123 13:58:49.981303 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ddgx"] Jan 23 13:58:50 crc kubenswrapper[4771]: I0123 13:58:50.147736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddgx" event={"ID":"07935179-bca0-404d-9d21-4e474fbf7a29","Type":"ContainerStarted","Data":"44591d9b7321ef737705de4aa1c830c8e946be3d903b7116dc97b42832a45582"} Jan 23 13:58:51 crc kubenswrapper[4771]: I0123 13:58:51.160221 4771 generic.go:334] "Generic (PLEG): container finished" podID="07935179-bca0-404d-9d21-4e474fbf7a29" containerID="2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14" exitCode=0 Jan 23 13:58:51 crc kubenswrapper[4771]: I0123 13:58:51.160325 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddgx" event={"ID":"07935179-bca0-404d-9d21-4e474fbf7a29","Type":"ContainerDied","Data":"2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14"} Jan 23 13:58:51 crc kubenswrapper[4771]: I0123 13:58:51.163736 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 13:58:53 crc kubenswrapper[4771]: I0123 13:58:53.183148 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddgx" event={"ID":"07935179-bca0-404d-9d21-4e474fbf7a29","Type":"ContainerStarted","Data":"c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212"} Jan 23 13:58:55 crc kubenswrapper[4771]: I0123 13:58:55.205604 4771 generic.go:334] "Generic (PLEG): container finished" podID="07935179-bca0-404d-9d21-4e474fbf7a29" containerID="c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212" exitCode=0 Jan 23 13:58:55 crc kubenswrapper[4771]: I0123 13:58:55.205662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddgx" event={"ID":"07935179-bca0-404d-9d21-4e474fbf7a29","Type":"ContainerDied","Data":"c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212"} Jan 23 13:58:56 crc kubenswrapper[4771]: I0123 13:58:56.217789 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddgx" event={"ID":"07935179-bca0-404d-9d21-4e474fbf7a29","Type":"ContainerStarted","Data":"df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35"} Jan 23 13:58:56 crc kubenswrapper[4771]: I0123 13:58:56.228559 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:58:56 crc kubenswrapper[4771]: 
E0123 13:58:56.228920 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:58:56 crc kubenswrapper[4771]: I0123 13:58:56.241755 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ddgx" podStartSLOduration=2.683358636 podStartE2EDuration="7.241733534s" podCreationTimestamp="2026-01-23 13:58:49 +0000 UTC" firstStartedPulling="2026-01-23 13:58:51.163449356 +0000 UTC m=+1572.185986981" lastFinishedPulling="2026-01-23 13:58:55.721824254 +0000 UTC m=+1576.744361879" observedRunningTime="2026-01-23 13:58:56.235534459 +0000 UTC m=+1577.258072084" watchObservedRunningTime="2026-01-23 13:58:56.241733534 +0000 UTC m=+1577.264271159" Jan 23 13:58:56 crc kubenswrapper[4771]: I0123 13:58:56.948089 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d6wdz"] Jan 23 13:58:56 crc kubenswrapper[4771]: I0123 13:58:56.951105 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:56.999953 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6wdz"] Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.040696 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-utilities\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.040820 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fr7\" (UniqueName: \"kubernetes.io/projected/b947484c-df41-46e4-b72b-7d30e3db5de4-kube-api-access-z5fr7\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.041060 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-catalog-content\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.146600 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-utilities\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.146693 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fr7\" (UniqueName: \"kubernetes.io/projected/b947484c-df41-46e4-b72b-7d30e3db5de4-kube-api-access-z5fr7\") pod 
\"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.146765 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-catalog-content\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.147355 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-utilities\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.147766 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-catalog-content\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.171722 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fr7\" (UniqueName: \"kubernetes.io/projected/b947484c-df41-46e4-b72b-7d30e3db5de4-kube-api-access-z5fr7\") pod \"certified-operators-d6wdz\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.309933 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:58:57 crc kubenswrapper[4771]: I0123 13:58:57.899828 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6wdz"] Jan 23 13:58:58 crc kubenswrapper[4771]: I0123 13:58:58.251142 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6wdz" event={"ID":"b947484c-df41-46e4-b72b-7d30e3db5de4","Type":"ContainerStarted","Data":"f16f3b113d1d4c2aec953b5900dff7ec5845c8308c7cbde81d3caf000f9e1cdf"} Jan 23 13:58:59 crc kubenswrapper[4771]: I0123 13:58:59.265576 4771 generic.go:334] "Generic (PLEG): container finished" podID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerID="a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8" exitCode=0 Jan 23 13:58:59 crc kubenswrapper[4771]: I0123 13:58:59.265712 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6wdz" event={"ID":"b947484c-df41-46e4-b72b-7d30e3db5de4","Type":"ContainerDied","Data":"a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8"} Jan 23 13:58:59 crc kubenswrapper[4771]: I0123 13:58:59.433839 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:58:59 crc kubenswrapper[4771]: I0123 13:58:59.433929 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:59:00 crc kubenswrapper[4771]: I0123 13:59:00.488682 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9ddgx" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="registry-server" probeResult="failure" output=< Jan 23 13:59:00 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 13:59:00 crc kubenswrapper[4771]: > Jan 23 13:59:01 crc kubenswrapper[4771]: I0123 13:59:01.289140 4771 generic.go:334] "Generic (PLEG): container finished" podID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerID="b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0" exitCode=0 Jan 23 13:59:01 crc kubenswrapper[4771]: I0123 13:59:01.289249 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6wdz" event={"ID":"b947484c-df41-46e4-b72b-7d30e3db5de4","Type":"ContainerDied","Data":"b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0"} Jan 23 13:59:02 crc kubenswrapper[4771]: I0123 13:59:02.305320 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6wdz" event={"ID":"b947484c-df41-46e4-b72b-7d30e3db5de4","Type":"ContainerStarted","Data":"66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc"} Jan 23 13:59:02 crc kubenswrapper[4771]: I0123 13:59:02.339132 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d6wdz" podStartSLOduration=3.891603493 podStartE2EDuration="6.339101337s" podCreationTimestamp="2026-01-23 13:58:56 +0000 UTC" firstStartedPulling="2026-01-23 13:58:59.268024396 +0000 UTC m=+1580.290562021" lastFinishedPulling="2026-01-23 13:59:01.71552224 +0000 UTC m=+1582.738059865" observedRunningTime="2026-01-23 13:59:02.333490239 +0000 UTC m=+1583.356027874" watchObservedRunningTime="2026-01-23 13:59:02.339101337 +0000 UTC m=+1583.361638972" Jan 23 13:59:07 crc kubenswrapper[4771]: I0123 13:59:07.310616 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:59:07 crc kubenswrapper[4771]: I0123 13:59:07.311363 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:59:07 crc kubenswrapper[4771]: I0123 13:59:07.361960 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:59:07 crc kubenswrapper[4771]: I0123 13:59:07.413041 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:59:07 crc kubenswrapper[4771]: I0123 13:59:07.604660 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6wdz"] Jan 23 13:59:09 crc kubenswrapper[4771]: I0123 13:59:09.238864 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:59:09 crc kubenswrapper[4771]: E0123 13:59:09.239535 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:59:09 crc kubenswrapper[4771]: I0123 13:59:09.383396 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d6wdz" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="registry-server" containerID="cri-o://66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc" gracePeriod=2 Jan 23 13:59:09 crc kubenswrapper[4771]: I0123 13:59:09.489136 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:59:09 crc kubenswrapper[4771]: I0123 13:59:09.547308 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:59:09 crc kubenswrapper[4771]: I0123 13:59:09.950597 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.012294 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ddgx"] Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.087106 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5fr7\" (UniqueName: \"kubernetes.io/projected/b947484c-df41-46e4-b72b-7d30e3db5de4-kube-api-access-z5fr7\") pod \"b947484c-df41-46e4-b72b-7d30e3db5de4\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.087159 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-utilities\") pod \"b947484c-df41-46e4-b72b-7d30e3db5de4\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.087212 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-catalog-content\") pod \"b947484c-df41-46e4-b72b-7d30e3db5de4\" (UID: \"b947484c-df41-46e4-b72b-7d30e3db5de4\") " Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.088526 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-utilities" (OuterVolumeSpecName: "utilities") pod "b947484c-df41-46e4-b72b-7d30e3db5de4" (UID: "b947484c-df41-46e4-b72b-7d30e3db5de4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.094951 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b947484c-df41-46e4-b72b-7d30e3db5de4-kube-api-access-z5fr7" (OuterVolumeSpecName: "kube-api-access-z5fr7") pod "b947484c-df41-46e4-b72b-7d30e3db5de4" (UID: "b947484c-df41-46e4-b72b-7d30e3db5de4"). InnerVolumeSpecName "kube-api-access-z5fr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.143013 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b947484c-df41-46e4-b72b-7d30e3db5de4" (UID: "b947484c-df41-46e4-b72b-7d30e3db5de4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.189450 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5fr7\" (UniqueName: \"kubernetes.io/projected/b947484c-df41-46e4-b72b-7d30e3db5de4-kube-api-access-z5fr7\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.189486 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.189495 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b947484c-df41-46e4-b72b-7d30e3db5de4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.401012 4771 generic.go:334] "Generic (PLEG): container finished" podID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerID="66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc" exitCode=0 Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.401082 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6wdz" event={"ID":"b947484c-df41-46e4-b72b-7d30e3db5de4","Type":"ContainerDied","Data":"66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc"} Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.401134 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6wdz" event={"ID":"b947484c-df41-46e4-b72b-7d30e3db5de4","Type":"ContainerDied","Data":"f16f3b113d1d4c2aec953b5900dff7ec5845c8308c7cbde81d3caf000f9e1cdf"} Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.401172 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d6wdz" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.401179 4771 scope.go:117] "RemoveContainer" containerID="66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.435617 4771 scope.go:117] "RemoveContainer" containerID="b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.445054 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6wdz"] Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.457939 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d6wdz"] Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.463582 4771 scope.go:117] "RemoveContainer" containerID="a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.510488 4771 scope.go:117] "RemoveContainer" containerID="66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc" Jan 23 13:59:10 crc kubenswrapper[4771]: E0123 13:59:10.511172 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc\": container with ID starting with 66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc not found: ID does not exist" containerID="66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.511243 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc"} err="failed to get container status \"66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc\": rpc error: code = NotFound desc = could not find container \"66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc\": container with ID starting with 66c1102ce3c8189d89dcf0150e21f74ddf63d937bd02fc6c10b8fe743be1bcbc not found: ID does not exist" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.511283 4771 scope.go:117] "RemoveContainer" containerID="b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0" Jan 23 13:59:10 crc kubenswrapper[4771]: E0123 13:59:10.511901 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0\": container with ID starting with b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0 not found: ID does not exist" containerID="b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.511933 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0"} err="failed to get container status \"b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0\": rpc error: code = NotFound desc = could not find container \"b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0\": container with ID starting with b8ce0f4768b249c381d1c0029a5934d34ec69abffe56f0f18b3ce2aee63647f0 not found: ID does not exist" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.511952 4771 scope.go:117] "RemoveContainer" 
containerID="a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8" Jan 23 13:59:10 crc kubenswrapper[4771]: E0123 13:59:10.512364 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8\": container with ID starting with a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8 not found: ID does not exist" containerID="a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8" Jan 23 13:59:10 crc kubenswrapper[4771]: I0123 13:59:10.512391 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8"} err="failed to get container status \"a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8\": rpc error: code = NotFound desc = could not find container \"a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8\": container with ID starting with a641196da61e20c1605760b874a94428f6a3e63c28eb5c55b8a90a8871fba8b8 not found: ID does not exist" Jan 23 13:59:11 crc kubenswrapper[4771]: I0123 13:59:11.243191 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" path="/var/lib/kubelet/pods/b947484c-df41-46e4-b72b-7d30e3db5de4/volumes" Jan 23 13:59:11 crc kubenswrapper[4771]: I0123 13:59:11.419331 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9ddgx" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="registry-server" containerID="cri-o://df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35" gracePeriod=2 Jan 23 13:59:11 crc kubenswrapper[4771]: I0123 13:59:11.980825 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.140787 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r9ks\" (UniqueName: \"kubernetes.io/projected/07935179-bca0-404d-9d21-4e474fbf7a29-kube-api-access-9r9ks\") pod \"07935179-bca0-404d-9d21-4e474fbf7a29\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.140906 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-utilities\") pod \"07935179-bca0-404d-9d21-4e474fbf7a29\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.141122 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-catalog-content\") pod \"07935179-bca0-404d-9d21-4e474fbf7a29\" (UID: \"07935179-bca0-404d-9d21-4e474fbf7a29\") " Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.141776 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-utilities" (OuterVolumeSpecName: "utilities") pod "07935179-bca0-404d-9d21-4e474fbf7a29" (UID: "07935179-bca0-404d-9d21-4e474fbf7a29"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.146998 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07935179-bca0-404d-9d21-4e474fbf7a29-kube-api-access-9r9ks" (OuterVolumeSpecName: "kube-api-access-9r9ks") pod "07935179-bca0-404d-9d21-4e474fbf7a29" (UID: "07935179-bca0-404d-9d21-4e474fbf7a29"). InnerVolumeSpecName "kube-api-access-9r9ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.244079 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r9ks\" (UniqueName: \"kubernetes.io/projected/07935179-bca0-404d-9d21-4e474fbf7a29-kube-api-access-9r9ks\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.244123 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.282649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07935179-bca0-404d-9d21-4e474fbf7a29" (UID: "07935179-bca0-404d-9d21-4e474fbf7a29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.346919 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07935179-bca0-404d-9d21-4e474fbf7a29-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.431375 4771 generic.go:334] "Generic (PLEG): container finished" podID="07935179-bca0-404d-9d21-4e474fbf7a29" containerID="df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35" exitCode=0 Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.431441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddgx" event={"ID":"07935179-bca0-404d-9d21-4e474fbf7a29","Type":"ContainerDied","Data":"df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35"} Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.431476 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ddgx" event={"ID":"07935179-bca0-404d-9d21-4e474fbf7a29","Type":"ContainerDied","Data":"44591d9b7321ef737705de4aa1c830c8e946be3d903b7116dc97b42832a45582"} Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.431560 4771 scope.go:117] "RemoveContainer" containerID="df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.432922 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ddgx" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.466465 4771 scope.go:117] "RemoveContainer" containerID="c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.479945 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ddgx"] Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.490955 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ddgx"] Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.509304 4771 scope.go:117] "RemoveContainer" containerID="2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.546054 4771 scope.go:117] "RemoveContainer" containerID="df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35" Jan 23 13:59:12 crc kubenswrapper[4771]: E0123 13:59:12.547172 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35\": container with ID starting with df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35 not found: ID does not exist" containerID="df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.547214 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35"} err="failed to get container status \"df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35\": rpc error: code = NotFound desc = could not find container \"df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35\": container with ID starting with df2dadb4e894f148297508138815bd5fe26663afdff4cebef9168eff234dfe35 not found: ID does not exist" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.547248 4771 scope.go:117] "RemoveContainer" containerID="c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212" Jan 23 13:59:12 crc kubenswrapper[4771]: E0123 13:59:12.547785 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212\": container with ID starting with c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212 not found: ID does not exist" containerID="c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.547813 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212"} err="failed to get container status \"c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212\": rpc error: code = NotFound desc = could not find container \"c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212\": container with ID starting with c8412aeda3d36cbbb60ea46fa2f4b561484ce5de0809a5e9765a7d1752852212 not found: ID does not exist" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.547831 4771 scope.go:117] "RemoveContainer" containerID="2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14" Jan 23 13:59:12 crc kubenswrapper[4771]: E0123 13:59:12.548263 4771 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14\": container with ID starting with 2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14 not found: ID does not exist" containerID="2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14" Jan 23 13:59:12 crc kubenswrapper[4771]: I0123 13:59:12.548293 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14"} err="failed to get container status \"2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14\": rpc error: code = NotFound desc = could not find container \"2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14\": container with ID starting with 2b8d9cd0b2a0baef903218020e7614e0283a1508d3a766772b5e331cf5253e14 not found: ID does not exist" Jan 23 13:59:13 crc kubenswrapper[4771]: I0123 13:59:13.254291 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" path="/var/lib/kubelet/pods/07935179-bca0-404d-9d21-4e474fbf7a29/volumes" Jan 23 13:59:24 crc kubenswrapper[4771]: I0123 13:59:24.228140 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:59:24 crc kubenswrapper[4771]: E0123 13:59:24.229206 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:59:36 crc kubenswrapper[4771]: I0123 13:59:36.229175 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:59:36 crc kubenswrapper[4771]: E0123 13:59:36.230250 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.057593 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-q42h4"] Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.070388 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-q42h4"] Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.542839 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sxdv7"] Jan 23 13:59:46 crc kubenswrapper[4771]: E0123 13:59:46.543939 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="extract-utilities" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.543967 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="extract-utilities" Jan 23 13:59:46 crc kubenswrapper[4771]: E0123 13:59:46.543988 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="extract-content" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.543997 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="extract-content" Jan 23 13:59:46 crc kubenswrapper[4771]: E0123 13:59:46.544015 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="extract-utilities" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.544024 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="extract-utilities" Jan 23 13:59:46 crc kubenswrapper[4771]: E0123 13:59:46.544063 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="registry-server" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.544071 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="registry-server" Jan 23 13:59:46 crc kubenswrapper[4771]: E0123 13:59:46.544089 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="extract-content" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.544096 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="extract-content" Jan 23 13:59:46 crc kubenswrapper[4771]: E0123 13:59:46.544115 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="registry-server" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.544123 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="registry-server" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.544400 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="07935179-bca0-404d-9d21-4e474fbf7a29" containerName="registry-server" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.548367 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b947484c-df41-46e4-b72b-7d30e3db5de4" containerName="registry-server" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.550911 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.605546 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-utilities\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.605673 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwmj\" (UniqueName: \"kubernetes.io/projected/6e61a031-5410-43db-8d8a-6dab0d81fcad-kube-api-access-9nwmj\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.605739 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-catalog-content\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.613817 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxdv7"] Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.709004 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-utilities\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.709083 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nwmj\" (UniqueName: \"kubernetes.io/projected/6e61a031-5410-43db-8d8a-6dab0d81fcad-kube-api-access-9nwmj\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.709132 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-catalog-content\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.709813 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-utilities\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.710315 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-catalog-content\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.731133 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9nwmj\" (UniqueName: \"kubernetes.io/projected/6e61a031-5410-43db-8d8a-6dab0d81fcad-kube-api-access-9nwmj\") pod \"community-operators-sxdv7\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:46 crc kubenswrapper[4771]: I0123 13:59:46.915051 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.080122 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-w664d"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.101087 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-62ab-account-create-update-fwllw"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.114671 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6997-account-create-update-8m4tt"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.135945 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-zngng"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.164895 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-75db-account-create-update-fhlwc"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.183713 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-62ab-account-create-update-fwllw"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.205226 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-w664d"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.205302 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-75db-account-create-update-fhlwc"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.216279 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-zngng"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.233382 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 13:59:47 crc kubenswrapper[4771]: E0123 13:59:47.233831 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.254162 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5491bbf3-e170-4960-8f02-f9a9c0d5094e" path="/var/lib/kubelet/pods/5491bbf3-e170-4960-8f02-f9a9c0d5094e/volumes" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.256307 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c816062-a5de-4c59-9c07-fa34ad4e8966" path="/var/lib/kubelet/pods/6c816062-a5de-4c59-9c07-fa34ad4e8966/volumes" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.256964 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891c08fa-6bf5-4df9-b57c-9af771aab285" path="/var/lib/kubelet/pods/891c08fa-6bf5-4df9-b57c-9af771aab285/volumes" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.257603 4771 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="97e51270-e42d-4da2-bddd-22a2b4c2fa44" path="/var/lib/kubelet/pods/97e51270-e42d-4da2-bddd-22a2b4c2fa44/volumes" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.258205 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5b01d0-80b6-476d-90c7-960f8bcf901b" path="/var/lib/kubelet/pods/fb5b01d0-80b6-476d-90c7-960f8bcf901b/volumes" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.262723 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6997-account-create-update-8m4tt"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.654427 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sxdv7"] Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.740857 4771 scope.go:117] "RemoveContainer" containerID="cb4bd0ab677fb1845d3e7152eea904d93b713d0562e6b1ba8d43056f0d70153c" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.783013 4771 scope.go:117] "RemoveContainer" containerID="aaac9362f990de0515778582967e45753ed2e06623140898141c3b1004f3e23e" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.816206 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxdv7" event={"ID":"6e61a031-5410-43db-8d8a-6dab0d81fcad","Type":"ContainerStarted","Data":"92a8dc2dc191144b16c91ecaedc507802e9b66a44d4c0717e0131ce710d18c3c"} Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.845377 4771 scope.go:117] "RemoveContainer" containerID="a6a147cec4259c145be16e972687ccc3ac25e1b26bdf135e57cb073e0c8b71d0" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.922815 4771 scope.go:117] "RemoveContainer" containerID="f97bc929f4863339e563b135b7e32c424f6ab238a9c712fd4a10a526ab07795c" Jan 23 13:59:47 crc kubenswrapper[4771]: I0123 13:59:47.978525 4771 scope.go:117] "RemoveContainer" containerID="cc7489c78cd7f1871f2685c2026230c4f69c44683e19719eae0503bf57f423fe" Jan 23 13:59:48 crc kubenswrapper[4771]: I0123 13:59:48.829295 4771 generic.go:334] "Generic (PLEG): container finished" podID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerID="83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c" exitCode=0 Jan 23 13:59:48 crc kubenswrapper[4771]: I0123 13:59:48.829367 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxdv7" event={"ID":"6e61a031-5410-43db-8d8a-6dab0d81fcad","Type":"ContainerDied","Data":"83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c"} Jan 23 13:59:49 crc kubenswrapper[4771]: I0123 13:59:49.242603 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a93887d-524c-4c98-b15f-b5370b5b3fb2" path="/var/lib/kubelet/pods/1a93887d-524c-4c98-b15f-b5370b5b3fb2/volumes" Jan 23 13:59:50 crc kubenswrapper[4771]: I0123 13:59:50.854945 4771 generic.go:334] "Generic (PLEG): container finished" podID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerID="f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0" exitCode=0 Jan 23 13:59:50 crc kubenswrapper[4771]: I0123 13:59:50.855077 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxdv7" event={"ID":"6e61a031-5410-43db-8d8a-6dab0d81fcad","Type":"ContainerDied","Data":"f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0"} Jan 23 13:59:50 crc kubenswrapper[4771]: I0123 13:59:50.923924 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-996nt"] Jan 23 13:59:50 
crc kubenswrapper[4771]: I0123 13:59:50.926326 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:50 crc kubenswrapper[4771]: I0123 13:59:50.948064 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-996nt"] Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.120919 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-utilities\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.121321 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6phvw\" (UniqueName: \"kubernetes.io/projected/6a3a3745-0bf3-4f6e-b3a6-faefde382006-kube-api-access-6phvw\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.121612 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-catalog-content\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.224100 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-utilities\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.224215 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6phvw\" (UniqueName: \"kubernetes.io/projected/6a3a3745-0bf3-4f6e-b3a6-faefde382006-kube-api-access-6phvw\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.224278 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-catalog-content\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.224922 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-utilities\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.224954 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-catalog-content\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 
crc kubenswrapper[4771]: I0123 13:59:51.248862 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6phvw\" (UniqueName: \"kubernetes.io/projected/6a3a3745-0bf3-4f6e-b3a6-faefde382006-kube-api-access-6phvw\") pod \"redhat-marketplace-996nt\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.250891 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 13:59:51 crc kubenswrapper[4771]: I0123 13:59:51.965340 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-996nt"] Jan 23 13:59:52 crc kubenswrapper[4771]: I0123 13:59:52.906626 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerID="c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b" exitCode=0 Jan 23 13:59:52 crc kubenswrapper[4771]: I0123 13:59:52.907012 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-996nt" event={"ID":"6a3a3745-0bf3-4f6e-b3a6-faefde382006","Type":"ContainerDied","Data":"c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b"} Jan 23 13:59:52 crc kubenswrapper[4771]: I0123 13:59:52.907046 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-996nt" event={"ID":"6a3a3745-0bf3-4f6e-b3a6-faefde382006","Type":"ContainerStarted","Data":"4dc1697f6f525b67b636795c56bb40f25b2a755f5e84314d5dcd0b577ab34055"} Jan 23 13:59:52 crc kubenswrapper[4771]: I0123 13:59:52.913107 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxdv7" event={"ID":"6e61a031-5410-43db-8d8a-6dab0d81fcad","Type":"ContainerStarted","Data":"2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d"} Jan 23 13:59:52 crc kubenswrapper[4771]: I0123 13:59:52.955390 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sxdv7" podStartSLOduration=4.236799425 podStartE2EDuration="6.955363424s" podCreationTimestamp="2026-01-23 13:59:46 +0000 UTC" firstStartedPulling="2026-01-23 13:59:48.831208506 +0000 UTC m=+1629.853746131" lastFinishedPulling="2026-01-23 13:59:51.549772505 +0000 UTC m=+1632.572310130" observedRunningTime="2026-01-23 13:59:52.95114473 +0000 UTC m=+1633.973682355" watchObservedRunningTime="2026-01-23 13:59:52.955363424 +0000 UTC m=+1633.977901049" Jan 23 13:59:53 crc kubenswrapper[4771]: I0123 13:59:53.927027 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-996nt" event={"ID":"6a3a3745-0bf3-4f6e-b3a6-faefde382006","Type":"ContainerStarted","Data":"0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f"} Jan 23 13:59:54 crc kubenswrapper[4771]: I0123 13:59:54.937943 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerID="0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f" exitCode=0 Jan 23 13:59:54 crc kubenswrapper[4771]: I0123 13:59:54.937999 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-996nt" event={"ID":"6a3a3745-0bf3-4f6e-b3a6-faefde382006","Type":"ContainerDied","Data":"0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f"} Jan 23 13:59:55 crc kubenswrapper[4771]: I0123 
13:59:55.949935 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-996nt" event={"ID":"6a3a3745-0bf3-4f6e-b3a6-faefde382006","Type":"ContainerStarted","Data":"81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983"} Jan 23 13:59:55 crc kubenswrapper[4771]: I0123 13:59:55.980920 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-996nt" podStartSLOduration=3.255474185 podStartE2EDuration="5.980891924s" podCreationTimestamp="2026-01-23 13:59:50 +0000 UTC" firstStartedPulling="2026-01-23 13:59:52.910043708 +0000 UTC m=+1633.932581333" lastFinishedPulling="2026-01-23 13:59:55.635461447 +0000 UTC m=+1636.657999072" observedRunningTime="2026-01-23 13:59:55.969338985 +0000 UTC m=+1636.991876610" watchObservedRunningTime="2026-01-23 13:59:55.980891924 +0000 UTC m=+1637.003429549" Jan 23 13:59:56 crc kubenswrapper[4771]: I0123 13:59:56.917107 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:56 crc kubenswrapper[4771]: I0123 13:59:56.917183 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:56 crc kubenswrapper[4771]: I0123 13:59:56.989124 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:57 crc kubenswrapper[4771]: I0123 13:59:57.052494 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.105868 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxdv7"] Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.106672 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sxdv7" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="registry-server" containerID="cri-o://2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d" gracePeriod=2 Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.712077 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.891197 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-utilities\") pod \"6e61a031-5410-43db-8d8a-6dab0d81fcad\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.891484 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-catalog-content\") pod \"6e61a031-5410-43db-8d8a-6dab0d81fcad\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.891567 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nwmj\" (UniqueName: \"kubernetes.io/projected/6e61a031-5410-43db-8d8a-6dab0d81fcad-kube-api-access-9nwmj\") pod \"6e61a031-5410-43db-8d8a-6dab0d81fcad\" (UID: \"6e61a031-5410-43db-8d8a-6dab0d81fcad\") " Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.894504 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-utilities" (OuterVolumeSpecName: "utilities") pod "6e61a031-5410-43db-8d8a-6dab0d81fcad" (UID: "6e61a031-5410-43db-8d8a-6dab0d81fcad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.901534 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e61a031-5410-43db-8d8a-6dab0d81fcad-kube-api-access-9nwmj" (OuterVolumeSpecName: "kube-api-access-9nwmj") pod "6e61a031-5410-43db-8d8a-6dab0d81fcad" (UID: "6e61a031-5410-43db-8d8a-6dab0d81fcad"). InnerVolumeSpecName "kube-api-access-9nwmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.970196 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e61a031-5410-43db-8d8a-6dab0d81fcad" (UID: "6e61a031-5410-43db-8d8a-6dab0d81fcad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.994868 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.996261 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nwmj\" (UniqueName: \"kubernetes.io/projected/6e61a031-5410-43db-8d8a-6dab0d81fcad-kube-api-access-9nwmj\") on node \"crc\" DevicePath \"\"" Jan 23 13:59:59 crc kubenswrapper[4771]: I0123 13:59:59.996308 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e61a031-5410-43db-8d8a-6dab0d81fcad-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.009895 4771 generic.go:334] "Generic (PLEG): container finished" podID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerID="2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d" exitCode=0 Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.009976 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxdv7" event={"ID":"6e61a031-5410-43db-8d8a-6dab0d81fcad","Type":"ContainerDied","Data":"2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d"} Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.010021 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sxdv7" event={"ID":"6e61a031-5410-43db-8d8a-6dab0d81fcad","Type":"ContainerDied","Data":"92a8dc2dc191144b16c91ecaedc507802e9b66a44d4c0717e0131ce710d18c3c"} Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.010067 4771 scope.go:117] "RemoveContainer" containerID="2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.010139 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sxdv7" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.046709 4771 scope.go:117] "RemoveContainer" containerID="f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.063268 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sxdv7"] Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.076895 4771 scope.go:117] "RemoveContainer" containerID="83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.084368 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sxdv7"] Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.127789 4771 scope.go:117] "RemoveContainer" containerID="2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d" Jan 23 14:00:00 crc kubenswrapper[4771]: E0123 14:00:00.131580 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d\": container with ID starting with 2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d not found: ID does not exist" containerID="2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.131657 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d"} err="failed to get container status \"2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d\": rpc error: code = NotFound desc = could not find container \"2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d\": container with ID starting with 2bf6d1794c924fe368307501499c4b61d2873e59a2da96c5ce4d2e2e6ee4f18d not found: ID does not exist" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.131704 4771 scope.go:117] "RemoveContainer" containerID="f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0" Jan 23 14:00:00 crc kubenswrapper[4771]: E0123 14:00:00.132259 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0\": container with ID starting with f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0 not found: ID does not exist" containerID="f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.132292 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0"} err="failed to get container status \"f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0\": rpc error: code = NotFound desc = could not find container \"f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0\": container with ID starting with f453c409b7f778162d9b74cfc713b3460e1ef65cb4a7635fff4e2ed2cb2b1ad0 not found: ID does not exist" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.132306 4771 scope.go:117] "RemoveContainer" containerID="83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c" Jan 23 14:00:00 crc kubenswrapper[4771]: E0123 14:00:00.132860 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c\": container with ID starting with 83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c not found: ID does not exist" containerID="83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.132883 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c"} err="failed to get container status \"83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c\": rpc error: code = NotFound desc = could not find container \"83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c\": container with ID starting with 83cf52744fd751b0d7cc3e22bd6f262b942e91c36fe7f0df34dfa083b80f2d0c not found: ID does not exist" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.158652 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn"] Jan 23 14:00:00 crc kubenswrapper[4771]: E0123 14:00:00.159602 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="extract-utilities" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.159630 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="extract-utilities" Jan 23 14:00:00 crc kubenswrapper[4771]: E0123 14:00:00.159671 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="registry-server" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.159685 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="registry-server" Jan 23 14:00:00 crc kubenswrapper[4771]: E0123 14:00:00.159708 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="extract-content" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.159717 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="extract-content" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.160012 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" containerName="registry-server" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.161264 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.164927 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.166502 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.181812 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn"] Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.301773 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-secret-volume\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.302258 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrl4p\" (UniqueName: \"kubernetes.io/projected/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-kube-api-access-jrl4p\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.302570 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-config-volume\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.405702 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-secret-volume\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.405878 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrl4p\" (UniqueName: \"kubernetes.io/projected/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-kube-api-access-jrl4p\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.405934 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-config-volume\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.406870 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-config-volume\") pod 
\"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.413560 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-secret-volume\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.436279 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrl4p\" (UniqueName: \"kubernetes.io/projected/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-kube-api-access-jrl4p\") pod \"collect-profiles-29486280-q46pn\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:00 crc kubenswrapper[4771]: I0123 14:00:00.486724 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:01 crc kubenswrapper[4771]: W0123 14:00:01.123317 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1ca2c3_4bbe_4b25_a648_538a05e742cd.slice/crio-e3768918b7449937954d2c915c96e80dcc2e69d04bece40e0136ae77dc1d5e6d WatchSource:0}: Error finding container e3768918b7449937954d2c915c96e80dcc2e69d04bece40e0136ae77dc1d5e6d: Status 404 returned error can't find the container with id e3768918b7449937954d2c915c96e80dcc2e69d04bece40e0136ae77dc1d5e6d Jan 23 14:00:01 crc kubenswrapper[4771]: I0123 14:00:01.123648 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn"] Jan 23 14:00:01 crc kubenswrapper[4771]: I0123 14:00:01.229383 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:00:01 crc kubenswrapper[4771]: E0123 14:00:01.229787 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:00:01 crc kubenswrapper[4771]: I0123 14:00:01.252846 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e61a031-5410-43db-8d8a-6dab0d81fcad" path="/var/lib/kubelet/pods/6e61a031-5410-43db-8d8a-6dab0d81fcad/volumes" Jan 23 14:00:01 crc kubenswrapper[4771]: I0123 14:00:01.254277 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 14:00:01 crc kubenswrapper[4771]: I0123 14:00:01.254323 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 14:00:01 crc kubenswrapper[4771]: I0123 14:00:01.324661 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 14:00:02 crc kubenswrapper[4771]: I0123 14:00:02.047885 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="cf1ca2c3-4bbe-4b25-a648-538a05e742cd" containerID="49a7f7d9b91dc929a1ead7f1fb924d020cb58649d231ac5dccb357676bbc3c47" exitCode=0 Jan 23 14:00:02 crc kubenswrapper[4771]: I0123 14:00:02.048013 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" event={"ID":"cf1ca2c3-4bbe-4b25-a648-538a05e742cd","Type":"ContainerDied","Data":"49a7f7d9b91dc929a1ead7f1fb924d020cb58649d231ac5dccb357676bbc3c47"} Jan 23 14:00:02 crc kubenswrapper[4771]: I0123 14:00:02.049427 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" event={"ID":"cf1ca2c3-4bbe-4b25-a648-538a05e742cd","Type":"ContainerStarted","Data":"e3768918b7449937954d2c915c96e80dcc2e69d04bece40e0136ae77dc1d5e6d"} Jan 23 14:00:02 crc kubenswrapper[4771]: I0123 14:00:02.110939 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.313344 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-996nt"] Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.450527 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.598135 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-secret-volume\") pod \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.598212 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-config-volume\") pod \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.598468 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrl4p\" (UniqueName: \"kubernetes.io/projected/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-kube-api-access-jrl4p\") pod \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\" (UID: \"cf1ca2c3-4bbe-4b25-a648-538a05e742cd\") " Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.599514 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "cf1ca2c3-4bbe-4b25-a648-538a05e742cd" (UID: "cf1ca2c3-4bbe-4b25-a648-538a05e742cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.605577 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cf1ca2c3-4bbe-4b25-a648-538a05e742cd" (UID: "cf1ca2c3-4bbe-4b25-a648-538a05e742cd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.606038 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-kube-api-access-jrl4p" (OuterVolumeSpecName: "kube-api-access-jrl4p") pod "cf1ca2c3-4bbe-4b25-a648-538a05e742cd" (UID: "cf1ca2c3-4bbe-4b25-a648-538a05e742cd"). InnerVolumeSpecName "kube-api-access-jrl4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.701562 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.701603 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:03 crc kubenswrapper[4771]: I0123 14:00:03.701627 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrl4p\" (UniqueName: \"kubernetes.io/projected/cf1ca2c3-4bbe-4b25-a648-538a05e742cd-kube-api-access-jrl4p\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.074652 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" event={"ID":"cf1ca2c3-4bbe-4b25-a648-538a05e742cd","Type":"ContainerDied","Data":"e3768918b7449937954d2c915c96e80dcc2e69d04bece40e0136ae77dc1d5e6d"} Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.074716 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3768918b7449937954d2c915c96e80dcc2e69d04bece40e0136ae77dc1d5e6d" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.074663 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.074819 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-996nt" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="registry-server" containerID="cri-o://81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983" gracePeriod=2 Jan 23 14:00:04 crc kubenswrapper[4771]: E0123 14:00:04.361047 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a3a3745_0bf3_4f6e_b3a6_faefde382006.slice/crio-81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a3a3745_0bf3_4f6e_b3a6_faefde382006.slice/crio-conmon-81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.569503 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.730590 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-catalog-content\") pod \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.730835 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-utilities\") pod \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.730863 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6phvw\" (UniqueName: \"kubernetes.io/projected/6a3a3745-0bf3-4f6e-b3a6-faefde382006-kube-api-access-6phvw\") pod \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\" (UID: \"6a3a3745-0bf3-4f6e-b3a6-faefde382006\") " Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.732576 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-utilities" (OuterVolumeSpecName: "utilities") pod "6a3a3745-0bf3-4f6e-b3a6-faefde382006" (UID: "6a3a3745-0bf3-4f6e-b3a6-faefde382006"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.738691 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3a3745-0bf3-4f6e-b3a6-faefde382006-kube-api-access-6phvw" (OuterVolumeSpecName: "kube-api-access-6phvw") pod "6a3a3745-0bf3-4f6e-b3a6-faefde382006" (UID: "6a3a3745-0bf3-4f6e-b3a6-faefde382006"). InnerVolumeSpecName "kube-api-access-6phvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.754919 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a3a3745-0bf3-4f6e-b3a6-faefde382006" (UID: "6a3a3745-0bf3-4f6e-b3a6-faefde382006"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.834700 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.834756 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6phvw\" (UniqueName: \"kubernetes.io/projected/6a3a3745-0bf3-4f6e-b3a6-faefde382006-kube-api-access-6phvw\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:04 crc kubenswrapper[4771]: I0123 14:00:04.834773 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3a3745-0bf3-4f6e-b3a6-faefde382006-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.087980 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerID="81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983" exitCode=0 Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.088090 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-996nt" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.088086 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-996nt" event={"ID":"6a3a3745-0bf3-4f6e-b3a6-faefde382006","Type":"ContainerDied","Data":"81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983"} Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.089292 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-996nt" event={"ID":"6a3a3745-0bf3-4f6e-b3a6-faefde382006","Type":"ContainerDied","Data":"4dc1697f6f525b67b636795c56bb40f25b2a755f5e84314d5dcd0b577ab34055"} Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.089343 4771 scope.go:117] "RemoveContainer" containerID="81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.126929 4771 scope.go:117] "RemoveContainer" containerID="0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.135928 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-996nt"] Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.150111 4771 scope.go:117] "RemoveContainer" containerID="c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.162459 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-996nt"] Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.224918 4771 scope.go:117] "RemoveContainer" containerID="81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983" Jan 23 14:00:05 crc kubenswrapper[4771]: E0123 14:00:05.225575 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983\": container with ID starting with 81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983 not found: ID does not exist" containerID="81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.225657 4771 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983"} err="failed to get container status \"81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983\": rpc error: code = NotFound desc = could not find container \"81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983\": container with ID starting with 81c9536c37286d5d8f0f3cc4f4df2022d5a5d76527e3d52b89211b4e0d0c6983 not found: ID does not exist" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.225700 4771 scope.go:117] "RemoveContainer" containerID="0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f" Jan 23 14:00:05 crc kubenswrapper[4771]: E0123 14:00:05.226199 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f\": container with ID starting with 0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f not found: ID does not exist" containerID="0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.226307 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f"} err="failed to get container status \"0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f\": rpc error: code = NotFound desc = could not find container \"0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f\": container with ID starting with 0229878ae9be85bd3294f80f0a77129210dc60bdf2374430fab5502703fc009f not found: ID does not exist" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.226423 4771 scope.go:117] "RemoveContainer" containerID="c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b" Jan 23 14:00:05 crc kubenswrapper[4771]: E0123 14:00:05.226779 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b\": container with ID starting with c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b not found: ID does not exist" containerID="c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.226819 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b"} err="failed to get container status \"c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b\": rpc error: code = NotFound desc = could not find container \"c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b\": container with ID starting with c5ec3e9c572d8bc8fc35013c1fe84ec773ab9bab798aca2f72b4978b160b8c9b not found: ID does not exist" Jan 23 14:00:05 crc kubenswrapper[4771]: I0123 14:00:05.246564 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" path="/var/lib/kubelet/pods/6a3a3745-0bf3-4f6e-b3a6-faefde382006/volumes" Jan 23 14:00:09 crc kubenswrapper[4771]: I0123 14:00:09.168015 4771 generic.go:334] "Generic (PLEG): container finished" podID="30a335c9-357c-4ea4-8737-d8d795f1a05d" containerID="204ada120862966a73f158cc82b24d0aa294b29c7de06d90d1f4ce4b7d1f6acf" exitCode=0 Jan 23 14:00:09 crc kubenswrapper[4771]: I0123 
14:00:09.168118 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" event={"ID":"30a335c9-357c-4ea4-8737-d8d795f1a05d","Type":"ContainerDied","Data":"204ada120862966a73f158cc82b24d0aa294b29c7de06d90d1f4ce4b7d1f6acf"} Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.809774 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.894589 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-bootstrap-combined-ca-bundle\") pod \"30a335c9-357c-4ea4-8737-d8d795f1a05d\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.895323 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-inventory\") pod \"30a335c9-357c-4ea4-8737-d8d795f1a05d\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.895401 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7zpl\" (UniqueName: \"kubernetes.io/projected/30a335c9-357c-4ea4-8737-d8d795f1a05d-kube-api-access-z7zpl\") pod \"30a335c9-357c-4ea4-8737-d8d795f1a05d\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.895457 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-ssh-key-openstack-edpm-ipam\") pod \"30a335c9-357c-4ea4-8737-d8d795f1a05d\" (UID: \"30a335c9-357c-4ea4-8737-d8d795f1a05d\") " Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.903201 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "30a335c9-357c-4ea4-8737-d8d795f1a05d" (UID: "30a335c9-357c-4ea4-8737-d8d795f1a05d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.904623 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a335c9-357c-4ea4-8737-d8d795f1a05d-kube-api-access-z7zpl" (OuterVolumeSpecName: "kube-api-access-z7zpl") pod "30a335c9-357c-4ea4-8737-d8d795f1a05d" (UID: "30a335c9-357c-4ea4-8737-d8d795f1a05d"). InnerVolumeSpecName "kube-api-access-z7zpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.934494 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-inventory" (OuterVolumeSpecName: "inventory") pod "30a335c9-357c-4ea4-8737-d8d795f1a05d" (UID: "30a335c9-357c-4ea4-8737-d8d795f1a05d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.934634 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "30a335c9-357c-4ea4-8737-d8d795f1a05d" (UID: "30a335c9-357c-4ea4-8737-d8d795f1a05d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.998651 4771 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.998726 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.998737 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7zpl\" (UniqueName: \"kubernetes.io/projected/30a335c9-357c-4ea4-8737-d8d795f1a05d-kube-api-access-z7zpl\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:10 crc kubenswrapper[4771]: I0123 14:00:10.998746 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/30a335c9-357c-4ea4-8737-d8d795f1a05d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.194282 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" event={"ID":"30a335c9-357c-4ea4-8737-d8d795f1a05d","Type":"ContainerDied","Data":"af5f2584bc4175c06900263d8c910f3095f0ac9e73490cf2b074640c9b1b4c74"} Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.194488 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5f2584bc4175c06900263d8c910f3095f0ac9e73490cf2b074640c9b1b4c74" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.194436 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.347607 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs"] Jan 23 14:00:11 crc kubenswrapper[4771]: E0123 14:00:11.348264 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="extract-content" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348283 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="extract-content" Jan 23 14:00:11 crc kubenswrapper[4771]: E0123 14:00:11.348299 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="extract-utilities" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348309 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="extract-utilities" Jan 23 14:00:11 crc kubenswrapper[4771]: E0123 14:00:11.348327 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="registry-server" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348334 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="registry-server" Jan 23 14:00:11 crc kubenswrapper[4771]: E0123 14:00:11.348354 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30a335c9-357c-4ea4-8737-d8d795f1a05d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348361 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a335c9-357c-4ea4-8737-d8d795f1a05d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 14:00:11 crc kubenswrapper[4771]: E0123 14:00:11.348391 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1ca2c3-4bbe-4b25-a648-538a05e742cd" containerName="collect-profiles" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348397 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1ca2c3-4bbe-4b25-a648-538a05e742cd" containerName="collect-profiles" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348637 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3a3745-0bf3-4f6e-b3a6-faefde382006" containerName="registry-server" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348651 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1ca2c3-4bbe-4b25-a648-538a05e742cd" containerName="collect-profiles" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.348680 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="30a335c9-357c-4ea4-8737-d8d795f1a05d" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.349600 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.356700 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.357099 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.357306 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.357507 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.360502 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs"] Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.515471 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.515697 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqmhf\" (UniqueName: \"kubernetes.io/projected/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-kube-api-access-gqmhf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.515772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.636082 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.636704 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqmhf\" (UniqueName: \"kubernetes.io/projected/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-kube-api-access-gqmhf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.636850 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.642489 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.642904 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.655324 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqmhf\" (UniqueName: \"kubernetes.io/projected/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-kube-api-access-gqmhf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-dspqs\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:11 crc kubenswrapper[4771]: I0123 14:00:11.695434 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:00:12 crc kubenswrapper[4771]: I0123 14:00:12.040191 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-r9jcp"] Jan 23 14:00:12 crc kubenswrapper[4771]: I0123 14:00:12.054478 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-r9jcp"] Jan 23 14:00:12 crc kubenswrapper[4771]: I0123 14:00:12.273656 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs"] Jan 23 14:00:13 crc kubenswrapper[4771]: I0123 14:00:13.216385 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" event={"ID":"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1","Type":"ContainerStarted","Data":"dbabb3be0c79f818b1c0cd6cf39b1e54de85c308b37ba1c215df402d389c7f8d"} Jan 23 14:00:13 crc kubenswrapper[4771]: I0123 14:00:13.218571 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" event={"ID":"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1","Type":"ContainerStarted","Data":"1e3a7ba1abb863b2be0a67eedbe3e2043ef82c3be7f2121e257901ec1a76149f"} Jan 23 14:00:13 crc kubenswrapper[4771]: I0123 14:00:13.228718 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:00:13 crc kubenswrapper[4771]: E0123 14:00:13.229012 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:00:13 crc kubenswrapper[4771]: I0123 14:00:13.241713 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d485bf-c7b4-41ab-b3d1-117d98e1df46" path="/var/lib/kubelet/pods/89d485bf-c7b4-41ab-b3d1-117d98e1df46/volumes" Jan 23 14:00:13 crc kubenswrapper[4771]: I0123 14:00:13.244648 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" podStartSLOduration=1.6794434150000002 podStartE2EDuration="2.244617957s" podCreationTimestamp="2026-01-23 14:00:11 +0000 UTC" firstStartedPulling="2026-01-23 14:00:12.28332836 +0000 UTC m=+1653.305865985" lastFinishedPulling="2026-01-23 14:00:12.848502902 +0000 UTC m=+1653.871040527" observedRunningTime="2026-01-23 14:00:13.232210221 +0000 UTC m=+1654.254747836" watchObservedRunningTime="2026-01-23 14:00:13.244617957 +0000 UTC m=+1654.267155592" Jan 23 14:00:18 crc kubenswrapper[4771]: I0123 14:00:18.052189 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-847pr"] Jan 23 14:00:18 crc kubenswrapper[4771]: I0123 14:00:18.070267 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7213-account-create-update-z86pp"] Jan 23 14:00:18 crc kubenswrapper[4771]: I0123 14:00:18.083271 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-847pr"] Jan 23 14:00:18 crc kubenswrapper[4771]: I0123 14:00:18.096680 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7213-account-create-update-z86pp"] Jan 23 14:00:19 crc kubenswrapper[4771]: I0123 14:00:19.250889 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aacf8e04-67b0-426d-a9fb-6eddaf9d2887" path="/var/lib/kubelet/pods/aacf8e04-67b0-426d-a9fb-6eddaf9d2887/volumes" Jan 23 14:00:19 crc kubenswrapper[4771]: I0123 14:00:19.253719 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60896ed-8589-4227-b109-0350ff91d3d2" path="/var/lib/kubelet/pods/c60896ed-8589-4227-b109-0350ff91d3d2/volumes" Jan 23 14:00:26 crc kubenswrapper[4771]: I0123 14:00:26.229535 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:00:26 crc kubenswrapper[4771]: E0123 14:00:26.230552 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.056505 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-n7p4s"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.071527 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-g8xc5"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.082466 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-67de-account-create-update-v9cgb"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.095063 4771 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/barbican-f90b-account-create-update-fmb96"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.106807 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-g8xc5"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.118593 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-67de-account-create-update-v9cgb"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.133196 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-n7p4s"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.150233 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f90b-account-create-update-fmb96"] Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.244142 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0916c80f-b8f7-4594-9409-fa66861ec3be" path="/var/lib/kubelet/pods/0916c80f-b8f7-4594-9409-fa66861ec3be/volumes" Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.246523 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19bf99b9-22eb-424f-b882-3f65e37f71fa" path="/var/lib/kubelet/pods/19bf99b9-22eb-424f-b882-3f65e37f71fa/volumes" Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.247530 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a6fa30-c5bf-4a35-981a-0681769b8da5" path="/var/lib/kubelet/pods/89a6fa30-c5bf-4a35-981a-0681769b8da5/volumes" Jan 23 14:00:31 crc kubenswrapper[4771]: I0123 14:00:31.248388 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b" path="/var/lib/kubelet/pods/bdf937b2-cc0d-46a0-a3a7-f7cf3c25653b/volumes" Jan 23 14:00:41 crc kubenswrapper[4771]: I0123 14:00:41.229387 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:00:41 crc kubenswrapper[4771]: E0123 14:00:41.230524 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:00:45 crc kubenswrapper[4771]: I0123 14:00:45.062884 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kd96t"] Jan 23 14:00:45 crc kubenswrapper[4771]: I0123 14:00:45.072375 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d677-account-create-update-blbpn"] Jan 23 14:00:45 crc kubenswrapper[4771]: I0123 14:00:45.083359 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d677-account-create-update-blbpn"] Jan 23 14:00:45 crc kubenswrapper[4771]: I0123 14:00:45.093312 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kd96t"] Jan 23 14:00:45 crc kubenswrapper[4771]: I0123 14:00:45.256111 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b08526-186a-47d2-b829-e8e85677343d" path="/var/lib/kubelet/pods/38b08526-186a-47d2-b829-e8e85677343d/volumes" Jan 23 14:00:45 crc kubenswrapper[4771]: I0123 14:00:45.257309 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06c31d6-aa39-4c7a-999a-f94f49e13a43" 
path="/var/lib/kubelet/pods/a06c31d6-aa39-4c7a-999a-f94f49e13a43/volumes" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.255208 4771 scope.go:117] "RemoveContainer" containerID="bba9ce7364285e275aa3b9c9e231181fe047fbcc44dc04d10f026d371c03a11c" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.285706 4771 scope.go:117] "RemoveContainer" containerID="859d79b828dbf7e9daaa88ef38ed160053969a50e469c2299c23907fba71922d" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.338989 4771 scope.go:117] "RemoveContainer" containerID="656cd170138c0c79d8cd8bfb79f693a7a05eef253e7d7245c473cfdb0341bccb" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.361077 4771 scope.go:117] "RemoveContainer" containerID="701a8c0b0333eafb395408c1ec27f8e2a8f13327c0d590a01625bef4a2127ad6" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.387908 4771 scope.go:117] "RemoveContainer" containerID="16fdffd4363663891bedc2baa8b1d277ddc917de0647d6707afba8db55eb8f13" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.432290 4771 scope.go:117] "RemoveContainer" containerID="7d5a14df78fe22eeaacc1030a0f4bf464249eb52d9a846c50fb640b65f9700cc" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.486623 4771 scope.go:117] "RemoveContainer" containerID="94d0bb47ad9e6559e7f42169442801d1066ac95724b506586577a8f023776155" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.545933 4771 scope.go:117] "RemoveContainer" containerID="a7277b542fcde208d3b5999a839053f639daf47d31622c39fac505422cb05abc" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.569975 4771 scope.go:117] "RemoveContainer" containerID="85ae269609b66f6acf93bcc0e892e75e9ccd8c465309b6295456328f47b9fe2e" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.628783 4771 scope.go:117] "RemoveContainer" containerID="400548e21e6cb7a964a4f566a2e24cd4d7acf2c4e96c6d4b2f7613a34f9b38d2" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.664287 4771 scope.go:117] "RemoveContainer" containerID="2655d8cca67d1ed15219795cd0ffb7097d686a9d9f93a959d0747d50b6d56da1" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.690738 4771 scope.go:117] "RemoveContainer" containerID="87d226406f3060a5893186f01c0a01237893b6d71a1105bff8164c0c403aa820" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.713128 4771 scope.go:117] "RemoveContainer" containerID="38a2ec8809c518853f049533d324593df7c998b6605d5edd12e9b8b730dbf454" Jan 23 14:00:48 crc kubenswrapper[4771]: I0123 14:00:48.738943 4771 scope.go:117] "RemoveContainer" containerID="4cd87ed3432a8babfe473cb57e1ce5be121c129f12ae8277c20d11edf3f9b680" Jan 23 14:00:53 crc kubenswrapper[4771]: I0123 14:00:53.228978 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:00:53 crc kubenswrapper[4771]: E0123 14:00:53.230974 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:00:54 crc kubenswrapper[4771]: I0123 14:00:54.042710 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wt562"] Jan 23 14:00:54 crc kubenswrapper[4771]: I0123 14:00:54.053817 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-sync-wt562"] Jan 23 14:00:55 crc kubenswrapper[4771]: I0123 14:00:55.252405 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16dd137-a1ef-4b9f-b6b9-8b70b3908db1" path="/var/lib/kubelet/pods/b16dd137-a1ef-4b9f-b6b9-8b70b3908db1/volumes" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.163598 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486281-v84cc"] Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.166732 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.181701 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486281-v84cc"] Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.316755 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-fernet-keys\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.316956 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-combined-ca-bundle\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.317040 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-config-data\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.317116 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngh5\" (UniqueName: \"kubernetes.io/projected/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-kube-api-access-pngh5\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.419994 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-combined-ca-bundle\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.420344 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-config-data\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.420560 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngh5\" (UniqueName: \"kubernetes.io/projected/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-kube-api-access-pngh5\") pod \"keystone-cron-29486281-v84cc\" (UID: 
\"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.421113 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-fernet-keys\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.428635 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-combined-ca-bundle\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.429871 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-fernet-keys\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.437440 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-config-data\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.438630 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngh5\" (UniqueName: \"kubernetes.io/projected/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-kube-api-access-pngh5\") pod \"keystone-cron-29486281-v84cc\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:00 crc kubenswrapper[4771]: I0123 14:01:00.494628 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:01 crc kubenswrapper[4771]: I0123 14:01:01.019337 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486281-v84cc"] Jan 23 14:01:01 crc kubenswrapper[4771]: W0123 14:01:01.025787 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d9b7bd_62ab_4e1c_bbf4_d8b4fb440afd.slice/crio-bbcde0690065d0da5042d344c399deb831bc8832f5cdbf41ae43464fc09a2e4a WatchSource:0}: Error finding container bbcde0690065d0da5042d344c399deb831bc8832f5cdbf41ae43464fc09a2e4a: Status 404 returned error can't find the container with id bbcde0690065d0da5042d344c399deb831bc8832f5cdbf41ae43464fc09a2e4a Jan 23 14:01:01 crc kubenswrapper[4771]: I0123 14:01:01.842818 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486281-v84cc" event={"ID":"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd","Type":"ContainerStarted","Data":"0a15d0362f495222273e460c780dc02a21b06634edcd7c3e1ce4210946176668"} Jan 23 14:01:01 crc kubenswrapper[4771]: I0123 14:01:01.843186 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486281-v84cc" event={"ID":"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd","Type":"ContainerStarted","Data":"bbcde0690065d0da5042d344c399deb831bc8832f5cdbf41ae43464fc09a2e4a"} Jan 23 14:01:01 crc kubenswrapper[4771]: I0123 14:01:01.865487 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486281-v84cc" podStartSLOduration=1.865465404 podStartE2EDuration="1.865465404s" podCreationTimestamp="2026-01-23 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:01:01.864578135 +0000 UTC m=+1702.887115760" watchObservedRunningTime="2026-01-23 14:01:01.865465404 +0000 UTC m=+1702.888003049" Jan 23 14:01:04 crc kubenswrapper[4771]: I0123 14:01:04.880613 4771 generic.go:334] "Generic (PLEG): container finished" podID="22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" containerID="0a15d0362f495222273e460c780dc02a21b06634edcd7c3e1ce4210946176668" exitCode=0 Jan 23 14:01:04 crc kubenswrapper[4771]: I0123 14:01:04.880718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486281-v84cc" event={"ID":"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd","Type":"ContainerDied","Data":"0a15d0362f495222273e460c780dc02a21b06634edcd7c3e1ce4210946176668"} Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.270597 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.379251 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-fernet-keys\") pod \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.379386 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-config-data\") pod \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.379433 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-combined-ca-bundle\") pod \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.379505 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pngh5\" (UniqueName: \"kubernetes.io/projected/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-kube-api-access-pngh5\") pod \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\" (UID: \"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd\") " Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.388114 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" (UID: "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.389818 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-kube-api-access-pngh5" (OuterVolumeSpecName: "kube-api-access-pngh5") pod "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" (UID: "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd"). InnerVolumeSpecName "kube-api-access-pngh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.415728 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" (UID: "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.444201 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-config-data" (OuterVolumeSpecName: "config-data") pod "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" (UID: "22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.483634 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.483680 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.483700 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pngh5\" (UniqueName: \"kubernetes.io/projected/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-kube-api-access-pngh5\") on node \"crc\" DevicePath \"\"" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.483713 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.907225 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486281-v84cc" event={"ID":"22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd","Type":"ContainerDied","Data":"bbcde0690065d0da5042d344c399deb831bc8832f5cdbf41ae43464fc09a2e4a"} Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.907654 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbcde0690065d0da5042d344c399deb831bc8832f5cdbf41ae43464fc09a2e4a" Jan 23 14:01:06 crc kubenswrapper[4771]: I0123 14:01:06.907369 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486281-v84cc" Jan 23 14:01:07 crc kubenswrapper[4771]: I0123 14:01:07.228431 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:01:07 crc kubenswrapper[4771]: E0123 14:01:07.228773 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:01:17 crc kubenswrapper[4771]: I0123 14:01:17.058291 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-mzql6"] Jan 23 14:01:17 crc kubenswrapper[4771]: I0123 14:01:17.071084 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-mzql6"] Jan 23 14:01:17 crc kubenswrapper[4771]: I0123 14:01:17.241288 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a326387-5c33-41e0-b73a-8670ae5b0c48" path="/var/lib/kubelet/pods/6a326387-5c33-41e0-b73a-8670ae5b0c48/volumes" Jan 23 14:01:20 crc kubenswrapper[4771]: I0123 14:01:20.230335 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:01:20 crc kubenswrapper[4771]: E0123 14:01:20.231508 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:01:32 crc kubenswrapper[4771]: I0123 14:01:32.050793 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-47x2v"] Jan 23 14:01:32 crc kubenswrapper[4771]: I0123 14:01:32.068384 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-47x2v"] Jan 23 14:01:32 crc kubenswrapper[4771]: I0123 14:01:32.229016 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:01:32 crc kubenswrapper[4771]: E0123 14:01:32.229333 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:01:33 crc kubenswrapper[4771]: I0123 14:01:33.266642 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa3ff81-43f1-4fcb-8c40-95d7aa786a06" path="/var/lib/kubelet/pods/5aa3ff81-43f1-4fcb-8c40-95d7aa786a06/volumes" Jan 23 14:01:34 crc kubenswrapper[4771]: I0123 14:01:34.058506 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8cggt"] Jan 23 14:01:34 crc kubenswrapper[4771]: I0123 14:01:34.068874 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8cggt"] Jan 23 14:01:35 crc kubenswrapper[4771]: I0123 14:01:35.242639 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fefc0e-a90c-4550-8f94-6e392f6bc6fc" path="/var/lib/kubelet/pods/e7fefc0e-a90c-4550-8f94-6e392f6bc6fc/volumes" Jan 23 14:01:45 crc kubenswrapper[4771]: I0123 14:01:45.052704 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-cz4ft"] Jan 23 14:01:45 crc kubenswrapper[4771]: I0123 14:01:45.065640 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-cz4ft"] Jan 23 14:01:45 crc kubenswrapper[4771]: I0123 14:01:45.243034 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fcfc471-7906-46f5-9238-4d66823ca1bf" path="/var/lib/kubelet/pods/8fcfc471-7906-46f5-9238-4d66823ca1bf/volumes" Jan 23 14:01:47 crc kubenswrapper[4771]: I0123 14:01:47.032835 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hbkvm"] Jan 23 14:01:47 crc kubenswrapper[4771]: I0123 14:01:47.041700 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hbkvm"] Jan 23 14:01:47 crc kubenswrapper[4771]: I0123 14:01:47.228332 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:01:47 crc kubenswrapper[4771]: E0123 14:01:47.228730 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:01:47 crc kubenswrapper[4771]: I0123 14:01:47.243424 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515" path="/var/lib/kubelet/pods/d8fe2dfb-8c93-4c82-bbc8-b24a3b6c6515/volumes" Jan 23 14:01:49 crc kubenswrapper[4771]: I0123 14:01:49.043144 4771 scope.go:117] "RemoveContainer" containerID="92701b8d11acf8cca66d4b3e154f17a6044b28ecdabb713991336418b4fe8a9d" Jan 23 14:01:49 crc kubenswrapper[4771]: I0123 14:01:49.084551 4771 scope.go:117] "RemoveContainer" containerID="ff5c952fbb74d6adf2903aa176ab04d30d75225d33f3e73b0e3647bb6ef2aa04" Jan 23 14:01:49 crc kubenswrapper[4771]: I0123 14:01:49.146367 4771 scope.go:117] "RemoveContainer" containerID="97c11b5aaa731ba63fb076b57926582f57dedfc2ab9ddff5231e4899e9baa2cd" Jan 23 14:01:49 crc kubenswrapper[4771]: I0123 14:01:49.199467 4771 scope.go:117] "RemoveContainer" containerID="505a60d53313a2366a13eef0c7e455aa06a9618df0087a01c25029baeac3749f" Jan 23 14:01:49 crc kubenswrapper[4771]: I0123 14:01:49.258881 4771 scope.go:117] "RemoveContainer" containerID="0be3692ff5636522a613b344a75d8813336b721bc19eaaee49433889018a304e" Jan 23 14:01:49 crc kubenswrapper[4771]: I0123 14:01:49.307988 4771 scope.go:117] "RemoveContainer" containerID="7a7a23b53d954ad91a5b4531d71241368e6fd0a4546f105836c13cfe2ff7c43d" Jan 23 14:01:55 crc kubenswrapper[4771]: I0123 14:01:55.469125 4771 generic.go:334] "Generic (PLEG): container finished" podID="efe8756a-9628-43ad-a9f1-7ff7e65c5fc1" containerID="dbabb3be0c79f818b1c0cd6cf39b1e54de85c308b37ba1c215df402d389c7f8d" exitCode=0 Jan 23 14:01:55 crc kubenswrapper[4771]: I0123 14:01:55.469232 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" event={"ID":"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1","Type":"ContainerDied","Data":"dbabb3be0c79f818b1c0cd6cf39b1e54de85c308b37ba1c215df402d389c7f8d"} Jan 23 14:01:56 crc kubenswrapper[4771]: I0123 14:01:56.978861 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.108235 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqmhf\" (UniqueName: \"kubernetes.io/projected/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-kube-api-access-gqmhf\") pod \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.108445 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-inventory\") pod \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.108476 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-ssh-key-openstack-edpm-ipam\") pod \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\" (UID: \"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1\") " Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.116937 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-kube-api-access-gqmhf" (OuterVolumeSpecName: "kube-api-access-gqmhf") pod "efe8756a-9628-43ad-a9f1-7ff7e65c5fc1" (UID: "efe8756a-9628-43ad-a9f1-7ff7e65c5fc1"). InnerVolumeSpecName "kube-api-access-gqmhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.144880 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "efe8756a-9628-43ad-a9f1-7ff7e65c5fc1" (UID: "efe8756a-9628-43ad-a9f1-7ff7e65c5fc1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.146780 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-inventory" (OuterVolumeSpecName: "inventory") pod "efe8756a-9628-43ad-a9f1-7ff7e65c5fc1" (UID: "efe8756a-9628-43ad-a9f1-7ff7e65c5fc1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.212004 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqmhf\" (UniqueName: \"kubernetes.io/projected/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-kube-api-access-gqmhf\") on node \"crc\" DevicePath \"\"" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.212061 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.212072 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/efe8756a-9628-43ad-a9f1-7ff7e65c5fc1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.494895 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" event={"ID":"efe8756a-9628-43ad-a9f1-7ff7e65c5fc1","Type":"ContainerDied","Data":"1e3a7ba1abb863b2be0a67eedbe3e2043ef82c3be7f2121e257901ec1a76149f"} Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.494976 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e3a7ba1abb863b2be0a67eedbe3e2043ef82c3be7f2121e257901ec1a76149f" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.494983 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-dspqs" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.604699 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm"] Jan 23 14:01:57 crc kubenswrapper[4771]: E0123 14:01:57.605856 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" containerName="keystone-cron" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.605881 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" containerName="keystone-cron" Jan 23 14:01:57 crc kubenswrapper[4771]: E0123 14:01:57.605916 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe8756a-9628-43ad-a9f1-7ff7e65c5fc1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.605924 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe8756a-9628-43ad-a9f1-7ff7e65c5fc1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.606149 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd" containerName="keystone-cron" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.606175 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe8756a-9628-43ad-a9f1-7ff7e65c5fc1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.607022 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.609693 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.609992 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.610212 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.610508 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.629613 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm"] Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.631864 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2rc\" (UniqueName: \"kubernetes.io/projected/c9057e27-502a-48d6-b1d5-0fc8e198ab78-kube-api-access-mv2rc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.632314 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.632745 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.734908 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2rc\" (UniqueName: \"kubernetes.io/projected/c9057e27-502a-48d6-b1d5-0fc8e198ab78-kube-api-access-mv2rc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.735068 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.735211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.740877 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:57 crc kubenswrapper[4771]: I0123 14:01:57.749216 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:58 crc kubenswrapper[4771]: I0123 14:01:58.230708 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:01:58 crc kubenswrapper[4771]: E0123 14:01:58.232300 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:01:58 crc kubenswrapper[4771]: I0123 14:01:58.630403 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2rc\" (UniqueName: \"kubernetes.io/projected/c9057e27-502a-48d6-b1d5-0fc8e198ab78-kube-api-access-mv2rc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:58 crc kubenswrapper[4771]: I0123 14:01:58.828636 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:01:59 crc kubenswrapper[4771]: I0123 14:01:59.061891 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-42qfl"] Jan 23 14:01:59 crc kubenswrapper[4771]: I0123 14:01:59.078624 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-42qfl"] Jan 23 14:01:59 crc kubenswrapper[4771]: I0123 14:01:59.244290 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f63357-c0a0-49eb-9011-bd32c84f414a" path="/var/lib/kubelet/pods/13f63357-c0a0-49eb-9011-bd32c84f414a/volumes" Jan 23 14:01:59 crc kubenswrapper[4771]: I0123 14:01:59.470548 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm"] Jan 23 14:01:59 crc kubenswrapper[4771]: I0123 14:01:59.520022 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" event={"ID":"c9057e27-502a-48d6-b1d5-0fc8e198ab78","Type":"ContainerStarted","Data":"68bf3adb4ab3eed276b9382e7357350e5bbc61a6e2d983aa675e68447ca9a6b4"} Jan 23 14:02:00 crc kubenswrapper[4771]: I0123 14:02:00.532794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" event={"ID":"c9057e27-502a-48d6-b1d5-0fc8e198ab78","Type":"ContainerStarted","Data":"d56d94b0545abf5300a74140e06002bdcc82977acdd5cf6a37c85912a16a1d03"} Jan 23 14:02:00 crc kubenswrapper[4771]: I0123 14:02:00.560574 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" podStartSLOduration=3.150227885 podStartE2EDuration="3.560532702s" podCreationTimestamp="2026-01-23 14:01:57 +0000 UTC" firstStartedPulling="2026-01-23 14:01:59.478881665 +0000 UTC m=+1760.501419290" lastFinishedPulling="2026-01-23 14:01:59.889186442 +0000 UTC m=+1760.911724107" observedRunningTime="2026-01-23 14:02:00.553691654 +0000 UTC m=+1761.576229279" watchObservedRunningTime="2026-01-23 14:02:00.560532702 +0000 UTC m=+1761.583070367" Jan 23 14:02:06 crc kubenswrapper[4771]: I0123 14:02:06.031855 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d7jd6"] Jan 23 14:02:06 crc kubenswrapper[4771]: I0123 14:02:06.053570 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d7jd6"] Jan 23 14:02:07 crc kubenswrapper[4771]: I0123 14:02:07.243478 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="506b2de1-f73d-4781-a52d-3f622c78660d" path="/var/lib/kubelet/pods/506b2de1-f73d-4781-a52d-3f622c78660d/volumes" Jan 23 14:02:11 crc kubenswrapper[4771]: I0123 14:02:11.228583 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:02:11 crc kubenswrapper[4771]: E0123 14:02:11.229522 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:02:23 crc kubenswrapper[4771]: I0123 14:02:23.228617 4771 scope.go:117] "RemoveContainer" 
containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:02:23 crc kubenswrapper[4771]: E0123 14:02:23.229492 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:02:36 crc kubenswrapper[4771]: I0123 14:02:36.228314 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:02:36 crc kubenswrapper[4771]: E0123 14:02:36.229929 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:02:49 crc kubenswrapper[4771]: I0123 14:02:49.497635 4771 scope.go:117] "RemoveContainer" containerID="5b362fa2ace7a4e0d64395ac447be66d9a4f4db474d562bad46dae327de84513" Jan 23 14:02:49 crc kubenswrapper[4771]: I0123 14:02:49.540093 4771 scope.go:117] "RemoveContainer" containerID="1250cdcdd562bec8790972936e00f99cde74755985fff387fe46d09a3f4d0f3e" Jan 23 14:02:50 crc kubenswrapper[4771]: I0123 14:02:50.228928 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:02:50 crc kubenswrapper[4771]: E0123 14:02:50.229195 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:03:05 crc kubenswrapper[4771]: I0123 14:03:05.229221 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.086580 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cf90-account-create-update-mgq2x"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.146540 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ac2a-account-create-update-4slxw"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.159242 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lbvw8"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.171307 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-772b-account-create-update-xq297"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.181785 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xlb7n"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.195669 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lbvw8"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.212948 4771 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cpvm8"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.223605 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cf90-account-create-update-mgq2x"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.234812 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-772b-account-create-update-xq297"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.246323 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ac2a-account-create-update-4slxw"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.258392 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"1a843b68343ae30fc8b49314e7c493c6427401850d3e744b30626dc7cd829606"} Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.266625 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xlb7n"] Jan 23 14:03:06 crc kubenswrapper[4771]: I0123 14:03:06.278303 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cpvm8"] Jan 23 14:03:07 crc kubenswrapper[4771]: I0123 14:03:07.243435 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0369c78d-76d3-4407-bdf5-07a6c326335f" path="/var/lib/kubelet/pods/0369c78d-76d3-4407-bdf5-07a6c326335f/volumes" Jan 23 14:03:07 crc kubenswrapper[4771]: I0123 14:03:07.245899 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d5e4f99-16c5-43fa-8606-e4b1656e2eaf" path="/var/lib/kubelet/pods/0d5e4f99-16c5-43fa-8606-e4b1656e2eaf/volumes" Jan 23 14:03:07 crc kubenswrapper[4771]: I0123 14:03:07.246837 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4579b579-c870-402e-90ca-0d37db6e919d" path="/var/lib/kubelet/pods/4579b579-c870-402e-90ca-0d37db6e919d/volumes" Jan 23 14:03:07 crc kubenswrapper[4771]: I0123 14:03:07.247466 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7835af-e3df-48e7-9db2-4c5fd0f75baf" path="/var/lib/kubelet/pods/6a7835af-e3df-48e7-9db2-4c5fd0f75baf/volumes" Jan 23 14:03:07 crc kubenswrapper[4771]: I0123 14:03:07.248689 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8f22a6-98e2-45a6-9589-77968163dd98" path="/var/lib/kubelet/pods/cb8f22a6-98e2-45a6-9589-77968163dd98/volumes" Jan 23 14:03:07 crc kubenswrapper[4771]: I0123 14:03:07.249361 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93e3306-16f5-4d49-88e0-0e5baef7912c" path="/var/lib/kubelet/pods/f93e3306-16f5-4d49-88e0-0e5baef7912c/volumes" Jan 23 14:03:20 crc kubenswrapper[4771]: I0123 14:03:20.416069 4771 generic.go:334] "Generic (PLEG): container finished" podID="c9057e27-502a-48d6-b1d5-0fc8e198ab78" containerID="d56d94b0545abf5300a74140e06002bdcc82977acdd5cf6a37c85912a16a1d03" exitCode=0 Jan 23 14:03:20 crc kubenswrapper[4771]: I0123 14:03:20.416155 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" event={"ID":"c9057e27-502a-48d6-b1d5-0fc8e198ab78","Type":"ContainerDied","Data":"d56d94b0545abf5300a74140e06002bdcc82977acdd5cf6a37c85912a16a1d03"} Jan 23 14:03:21 crc kubenswrapper[4771]: I0123 14:03:21.896823 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:03:21 crc kubenswrapper[4771]: I0123 14:03:21.994680 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv2rc\" (UniqueName: \"kubernetes.io/projected/c9057e27-502a-48d6-b1d5-0fc8e198ab78-kube-api-access-mv2rc\") pod \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " Jan 23 14:03:21 crc kubenswrapper[4771]: I0123 14:03:21.994770 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-ssh-key-openstack-edpm-ipam\") pod \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " Jan 23 14:03:21 crc kubenswrapper[4771]: I0123 14:03:21.994836 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-inventory\") pod \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\" (UID: \"c9057e27-502a-48d6-b1d5-0fc8e198ab78\") " Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.002437 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9057e27-502a-48d6-b1d5-0fc8e198ab78-kube-api-access-mv2rc" (OuterVolumeSpecName: "kube-api-access-mv2rc") pod "c9057e27-502a-48d6-b1d5-0fc8e198ab78" (UID: "c9057e27-502a-48d6-b1d5-0fc8e198ab78"). InnerVolumeSpecName "kube-api-access-mv2rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.036181 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c9057e27-502a-48d6-b1d5-0fc8e198ab78" (UID: "c9057e27-502a-48d6-b1d5-0fc8e198ab78"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.042691 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-inventory" (OuterVolumeSpecName: "inventory") pod "c9057e27-502a-48d6-b1d5-0fc8e198ab78" (UID: "c9057e27-502a-48d6-b1d5-0fc8e198ab78"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.097793 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv2rc\" (UniqueName: \"kubernetes.io/projected/c9057e27-502a-48d6-b1d5-0fc8e198ab78-kube-api-access-mv2rc\") on node \"crc\" DevicePath \"\"" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.097832 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.097844 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c9057e27-502a-48d6-b1d5-0fc8e198ab78-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.437393 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" event={"ID":"c9057e27-502a-48d6-b1d5-0fc8e198ab78","Type":"ContainerDied","Data":"68bf3adb4ab3eed276b9382e7357350e5bbc61a6e2d983aa675e68447ca9a6b4"} Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.437459 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68bf3adb4ab3eed276b9382e7357350e5bbc61a6e2d983aa675e68447ca9a6b4" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.437525 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.619296 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb"] Jan 23 14:03:22 crc kubenswrapper[4771]: E0123 14:03:22.620118 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9057e27-502a-48d6-b1d5-0fc8e198ab78" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.620147 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9057e27-502a-48d6-b1d5-0fc8e198ab78" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.620471 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9057e27-502a-48d6-b1d5-0fc8e198ab78" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.621581 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.631685 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.632231 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.632489 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.632803 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.636891 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb"] Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.712589 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8b7s\" (UniqueName: \"kubernetes.io/projected/45d51499-49b6-43d8-a21f-c9984307c689-kube-api-access-j8b7s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.712737 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.712839 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.815237 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8b7s\" (UniqueName: \"kubernetes.io/projected/45d51499-49b6-43d8-a21f-c9984307c689-kube-api-access-j8b7s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.815316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.815368 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.821646 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.822966 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.834392 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8b7s\" (UniqueName: \"kubernetes.io/projected/45d51499-49b6-43d8-a21f-c9984307c689-kube-api-access-j8b7s\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-96chb\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:22 crc kubenswrapper[4771]: I0123 14:03:22.953201 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:23 crc kubenswrapper[4771]: I0123 14:03:23.601655 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb"] Jan 23 14:03:23 crc kubenswrapper[4771]: W0123 14:03:23.612237 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d51499_49b6_43d8_a21f_c9984307c689.slice/crio-3780482fa971649c852d47e07c44bada9770909af2b8790f509fee10ba24f3df WatchSource:0}: Error finding container 3780482fa971649c852d47e07c44bada9770909af2b8790f509fee10ba24f3df: Status 404 returned error can't find the container with id 3780482fa971649c852d47e07c44bada9770909af2b8790f509fee10ba24f3df Jan 23 14:03:24 crc kubenswrapper[4771]: I0123 14:03:24.465625 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" event={"ID":"45d51499-49b6-43d8-a21f-c9984307c689","Type":"ContainerStarted","Data":"3780482fa971649c852d47e07c44bada9770909af2b8790f509fee10ba24f3df"} Jan 23 14:03:25 crc kubenswrapper[4771]: I0123 14:03:25.477393 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" event={"ID":"45d51499-49b6-43d8-a21f-c9984307c689","Type":"ContainerStarted","Data":"ee6a8df08143395460df2d235836305496d9c664b69178a84373a06e63dd1ffd"} Jan 23 14:03:30 crc kubenswrapper[4771]: I0123 14:03:30.525764 4771 generic.go:334] "Generic (PLEG): container finished" podID="45d51499-49b6-43d8-a21f-c9984307c689" containerID="ee6a8df08143395460df2d235836305496d9c664b69178a84373a06e63dd1ffd" exitCode=0 Jan 23 14:03:30 crc kubenswrapper[4771]: I0123 
14:03:30.525933 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" event={"ID":"45d51499-49b6-43d8-a21f-c9984307c689","Type":"ContainerDied","Data":"ee6a8df08143395460df2d235836305496d9c664b69178a84373a06e63dd1ffd"} Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.022753 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.177883 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-ssh-key-openstack-edpm-ipam\") pod \"45d51499-49b6-43d8-a21f-c9984307c689\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.177966 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8b7s\" (UniqueName: \"kubernetes.io/projected/45d51499-49b6-43d8-a21f-c9984307c689-kube-api-access-j8b7s\") pod \"45d51499-49b6-43d8-a21f-c9984307c689\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.178377 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-inventory\") pod \"45d51499-49b6-43d8-a21f-c9984307c689\" (UID: \"45d51499-49b6-43d8-a21f-c9984307c689\") " Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.185737 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d51499-49b6-43d8-a21f-c9984307c689-kube-api-access-j8b7s" (OuterVolumeSpecName: "kube-api-access-j8b7s") pod "45d51499-49b6-43d8-a21f-c9984307c689" (UID: "45d51499-49b6-43d8-a21f-c9984307c689"). InnerVolumeSpecName "kube-api-access-j8b7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.212490 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-inventory" (OuterVolumeSpecName: "inventory") pod "45d51499-49b6-43d8-a21f-c9984307c689" (UID: "45d51499-49b6-43d8-a21f-c9984307c689"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.215336 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "45d51499-49b6-43d8-a21f-c9984307c689" (UID: "45d51499-49b6-43d8-a21f-c9984307c689"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.281690 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.281734 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45d51499-49b6-43d8-a21f-c9984307c689-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.281782 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8b7s\" (UniqueName: \"kubernetes.io/projected/45d51499-49b6-43d8-a21f-c9984307c689-kube-api-access-j8b7s\") on node \"crc\" DevicePath \"\"" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.571096 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" event={"ID":"45d51499-49b6-43d8-a21f-c9984307c689","Type":"ContainerDied","Data":"3780482fa971649c852d47e07c44bada9770909af2b8790f509fee10ba24f3df"} Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.571161 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3780482fa971649c852d47e07c44bada9770909af2b8790f509fee10ba24f3df" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.571845 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-96chb" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.652691 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5"] Jan 23 14:03:32 crc kubenswrapper[4771]: E0123 14:03:32.653423 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d51499-49b6-43d8-a21f-c9984307c689" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.653458 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d51499-49b6-43d8-a21f-c9984307c689" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.653774 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d51499-49b6-43d8-a21f-c9984307c689" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.655055 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.658798 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.658854 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.659117 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.659587 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.684101 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5"] Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.797515 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.797721 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7m4z\" (UniqueName: \"kubernetes.io/projected/0d60c494-e705-4c10-aabf-2d07734e9048-kube-api-access-z7m4z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.797877 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.900077 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.900219 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7m4z\" (UniqueName: \"kubernetes.io/projected/0d60c494-e705-4c10-aabf-2d07734e9048-kube-api-access-z7m4z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.900305 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.905271 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.906603 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.929956 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7m4z\" (UniqueName: \"kubernetes.io/projected/0d60c494-e705-4c10-aabf-2d07734e9048-kube-api-access-z7m4z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fdvf5\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:32 crc kubenswrapper[4771]: I0123 14:03:32.979746 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:03:33 crc kubenswrapper[4771]: I0123 14:03:33.587826 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5"] Jan 23 14:03:34 crc kubenswrapper[4771]: I0123 14:03:34.596182 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" event={"ID":"0d60c494-e705-4c10-aabf-2d07734e9048","Type":"ContainerStarted","Data":"e1e2fd2106811d9cb04a26bfb3fc7202b1577f242a08176fe6c6731bbfa1dd13"} Jan 23 14:03:35 crc kubenswrapper[4771]: I0123 14:03:35.623137 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" event={"ID":"0d60c494-e705-4c10-aabf-2d07734e9048","Type":"ContainerStarted","Data":"6cb9023d2a29aeadd94367725f61440d7b3a56286166ea600ea7bffdb12ec0d7"} Jan 23 14:03:35 crc kubenswrapper[4771]: I0123 14:03:35.678156 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" podStartSLOduration=3.067568364 podStartE2EDuration="3.678126033s" podCreationTimestamp="2026-01-23 14:03:32 +0000 UTC" firstStartedPulling="2026-01-23 14:03:33.583005404 +0000 UTC m=+1854.605543029" lastFinishedPulling="2026-01-23 14:03:34.193563073 +0000 UTC m=+1855.216100698" observedRunningTime="2026-01-23 14:03:35.665621074 +0000 UTC m=+1856.688158719" watchObservedRunningTime="2026-01-23 14:03:35.678126033 +0000 UTC m=+1856.700663658" Jan 23 14:03:41 crc kubenswrapper[4771]: I0123 14:03:41.053486 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2bw75"] Jan 23 14:03:41 crc kubenswrapper[4771]: I0123 14:03:41.064758 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2bw75"] Jan 
23 14:03:41 crc kubenswrapper[4771]: I0123 14:03:41.242476 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e875d7-8f49-4d0d-a51e-3e0c5071bafe" path="/var/lib/kubelet/pods/c7e875d7-8f49-4d0d-a51e-3e0c5071bafe/volumes" Jan 23 14:03:49 crc kubenswrapper[4771]: I0123 14:03:49.695545 4771 scope.go:117] "RemoveContainer" containerID="91e885c62bc81bfb5e96fb8f2db85e70c90e01806346919076366e9e1af2333d" Jan 23 14:03:49 crc kubenswrapper[4771]: I0123 14:03:49.725053 4771 scope.go:117] "RemoveContainer" containerID="a0b70f3cfa59f2eb3cc93c4118bb2dceb008e9f96a7d16f688f3ea29473abfbd" Jan 23 14:03:49 crc kubenswrapper[4771]: I0123 14:03:49.788275 4771 scope.go:117] "RemoveContainer" containerID="9a8197b30c26491834bc5e42d074fbdef69462419a59eb81b01e83debc145687" Jan 23 14:03:49 crc kubenswrapper[4771]: I0123 14:03:49.844980 4771 scope.go:117] "RemoveContainer" containerID="1b645f92c84dc6f4b22655ae1482cee38a23ec3b6ced2edf9b69dbacfdc05e37" Jan 23 14:03:49 crc kubenswrapper[4771]: I0123 14:03:49.893542 4771 scope.go:117] "RemoveContainer" containerID="39879edeb910e84f38b4564b201e6d0d3c8626ecc1d863729ecd1919da1cb16f" Jan 23 14:03:49 crc kubenswrapper[4771]: I0123 14:03:49.953978 4771 scope.go:117] "RemoveContainer" containerID="d0d6d8c46e45e94e71c2824a6b84445a4b55d6d55eb9fab5f77dda9601b4a41a" Jan 23 14:03:50 crc kubenswrapper[4771]: I0123 14:03:50.035173 4771 scope.go:117] "RemoveContainer" containerID="e94f7c254f5b712c8d84cebdab80b44c269c41165653b98362902c9f9a26a346" Jan 23 14:04:06 crc kubenswrapper[4771]: I0123 14:04:06.054747 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zchgr"] Jan 23 14:04:06 crc kubenswrapper[4771]: I0123 14:04:06.069086 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lwjw4"] Jan 23 14:04:06 crc kubenswrapper[4771]: I0123 14:04:06.082647 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zchgr"] Jan 23 14:04:06 crc kubenswrapper[4771]: I0123 14:04:06.092998 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lwjw4"] Jan 23 14:04:07 crc kubenswrapper[4771]: I0123 14:04:07.243564 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bdfda9a-c75f-412b-81f4-b33bb47d9435" path="/var/lib/kubelet/pods/3bdfda9a-c75f-412b-81f4-b33bb47d9435/volumes" Jan 23 14:04:07 crc kubenswrapper[4771]: I0123 14:04:07.244548 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb9ca3b-06d1-428d-a140-b946a9ef5931" path="/var/lib/kubelet/pods/ebb9ca3b-06d1-428d-a140-b946a9ef5931/volumes" Jan 23 14:04:18 crc kubenswrapper[4771]: I0123 14:04:18.141550 4771 generic.go:334] "Generic (PLEG): container finished" podID="0d60c494-e705-4c10-aabf-2d07734e9048" containerID="6cb9023d2a29aeadd94367725f61440d7b3a56286166ea600ea7bffdb12ec0d7" exitCode=0 Jan 23 14:04:18 crc kubenswrapper[4771]: I0123 14:04:18.141639 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" event={"ID":"0d60c494-e705-4c10-aabf-2d07734e9048","Type":"ContainerDied","Data":"6cb9023d2a29aeadd94367725f61440d7b3a56286166ea600ea7bffdb12ec0d7"} Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.620055 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.714866 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7m4z\" (UniqueName: \"kubernetes.io/projected/0d60c494-e705-4c10-aabf-2d07734e9048-kube-api-access-z7m4z\") pod \"0d60c494-e705-4c10-aabf-2d07734e9048\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.715057 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-ssh-key-openstack-edpm-ipam\") pod \"0d60c494-e705-4c10-aabf-2d07734e9048\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.715217 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-inventory\") pod \"0d60c494-e705-4c10-aabf-2d07734e9048\" (UID: \"0d60c494-e705-4c10-aabf-2d07734e9048\") " Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.726518 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d60c494-e705-4c10-aabf-2d07734e9048-kube-api-access-z7m4z" (OuterVolumeSpecName: "kube-api-access-z7m4z") pod "0d60c494-e705-4c10-aabf-2d07734e9048" (UID: "0d60c494-e705-4c10-aabf-2d07734e9048"). InnerVolumeSpecName "kube-api-access-z7m4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.752018 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d60c494-e705-4c10-aabf-2d07734e9048" (UID: "0d60c494-e705-4c10-aabf-2d07734e9048"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.754065 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-inventory" (OuterVolumeSpecName: "inventory") pod "0d60c494-e705-4c10-aabf-2d07734e9048" (UID: "0d60c494-e705-4c10-aabf-2d07734e9048"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.819071 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.819120 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7m4z\" (UniqueName: \"kubernetes.io/projected/0d60c494-e705-4c10-aabf-2d07734e9048-kube-api-access-z7m4z\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:19 crc kubenswrapper[4771]: I0123 14:04:19.819139 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d60c494-e705-4c10-aabf-2d07734e9048-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.171194 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" event={"ID":"0d60c494-e705-4c10-aabf-2d07734e9048","Type":"ContainerDied","Data":"e1e2fd2106811d9cb04a26bfb3fc7202b1577f242a08176fe6c6731bbfa1dd13"} Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.171792 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e2fd2106811d9cb04a26bfb3fc7202b1577f242a08176fe6c6731bbfa1dd13" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.171274 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fdvf5" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.300361 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l"] Jan 23 14:04:20 crc kubenswrapper[4771]: E0123 14:04:20.300951 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d60c494-e705-4c10-aabf-2d07734e9048" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.300975 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d60c494-e705-4c10-aabf-2d07734e9048" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.301246 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d60c494-e705-4c10-aabf-2d07734e9048" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.302257 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.305059 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.305285 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.307093 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.311648 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.312327 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l"] Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.331629 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzw65\" (UniqueName: \"kubernetes.io/projected/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-kube-api-access-kzw65\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.331779 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.331817 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.433909 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzw65\" (UniqueName: \"kubernetes.io/projected/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-kube-api-access-kzw65\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.434047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.434104 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.440587 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.440655 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.456830 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzw65\" (UniqueName: \"kubernetes.io/projected/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-kube-api-access-kzw65\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:20 crc kubenswrapper[4771]: I0123 14:04:20.621475 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:04:21 crc kubenswrapper[4771]: I0123 14:04:21.244387 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l"] Jan 23 14:04:21 crc kubenswrapper[4771]: I0123 14:04:21.248824 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:04:22 crc kubenswrapper[4771]: I0123 14:04:22.195871 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" event={"ID":"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84","Type":"ContainerStarted","Data":"575f2298ba731eb5114d7ab5437401c9f6b595c74de532483708b9e167479531"} Jan 23 14:04:22 crc kubenswrapper[4771]: I0123 14:04:22.196688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" event={"ID":"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84","Type":"ContainerStarted","Data":"e6b71906a9b79b7ac91d917595141661af9a9bcd1eaabe5690f12f74cab7b01b"} Jan 23 14:04:22 crc kubenswrapper[4771]: I0123 14:04:22.220376 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" podStartSLOduration=1.698901084 podStartE2EDuration="2.220350816s" podCreationTimestamp="2026-01-23 14:04:20 +0000 UTC" firstStartedPulling="2026-01-23 14:04:21.248485235 +0000 UTC m=+1902.271022870" lastFinishedPulling="2026-01-23 14:04:21.769934977 +0000 UTC m=+1902.792472602" observedRunningTime="2026-01-23 14:04:22.210255135 +0000 UTC m=+1903.232792770" watchObservedRunningTime="2026-01-23 14:04:22.220350816 +0000 UTC m=+1903.242888441" Jan 23 14:04:50 crc kubenswrapper[4771]: I0123 14:04:50.241081 4771 
scope.go:117] "RemoveContainer" containerID="82fd031f8745cac6311115cf334648b64b78793fb9d604cdd5ab14f8531e5583" Jan 23 14:04:50 crc kubenswrapper[4771]: I0123 14:04:50.312281 4771 scope.go:117] "RemoveContainer" containerID="2aad550a44cab96e05de1c3e5f32f531e3aefb34835ef16ffd27b4740199bef3" Jan 23 14:04:51 crc kubenswrapper[4771]: I0123 14:04:51.063577 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r22x"] Jan 23 14:04:51 crc kubenswrapper[4771]: I0123 14:04:51.075716 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6r22x"] Jan 23 14:04:51 crc kubenswrapper[4771]: I0123 14:04:51.245833 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b71cd5-6305-4222-ad5f-7c8899419c5f" path="/var/lib/kubelet/pods/42b71cd5-6305-4222-ad5f-7c8899419c5f/volumes" Jan 23 14:05:25 crc kubenswrapper[4771]: I0123 14:05:25.861586 4771 generic.go:334] "Generic (PLEG): container finished" podID="a9074e8c-55ca-48b3-ae5f-9b06c4c3da84" containerID="575f2298ba731eb5114d7ab5437401c9f6b595c74de532483708b9e167479531" exitCode=0 Jan 23 14:05:25 crc kubenswrapper[4771]: I0123 14:05:25.861652 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" event={"ID":"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84","Type":"ContainerDied","Data":"575f2298ba731eb5114d7ab5437401c9f6b595c74de532483708b9e167479531"} Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.313551 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.367339 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-inventory\") pod \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.367477 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzw65\" (UniqueName: \"kubernetes.io/projected/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-kube-api-access-kzw65\") pod \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.367549 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-ssh-key-openstack-edpm-ipam\") pod \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\" (UID: \"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84\") " Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.374716 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-kube-api-access-kzw65" (OuterVolumeSpecName: "kube-api-access-kzw65") pod "a9074e8c-55ca-48b3-ae5f-9b06c4c3da84" (UID: "a9074e8c-55ca-48b3-ae5f-9b06c4c3da84"). InnerVolumeSpecName "kube-api-access-kzw65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.399048 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-inventory" (OuterVolumeSpecName: "inventory") pod "a9074e8c-55ca-48b3-ae5f-9b06c4c3da84" (UID: "a9074e8c-55ca-48b3-ae5f-9b06c4c3da84"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.404738 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a9074e8c-55ca-48b3-ae5f-9b06c4c3da84" (UID: "a9074e8c-55ca-48b3-ae5f-9b06c4c3da84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.470807 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.470851 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzw65\" (UniqueName: \"kubernetes.io/projected/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-kube-api-access-kzw65\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.470867 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9074e8c-55ca-48b3-ae5f-9b06c4c3da84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.884788 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" event={"ID":"a9074e8c-55ca-48b3-ae5f-9b06c4c3da84","Type":"ContainerDied","Data":"e6b71906a9b79b7ac91d917595141661af9a9bcd1eaabe5690f12f74cab7b01b"} Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.885267 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6b71906a9b79b7ac91d917595141661af9a9bcd1eaabe5690f12f74cab7b01b" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.884832 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l" Jan 23 14:05:27 crc kubenswrapper[4771]: I0123 14:05:27.999639 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j4gml"] Jan 23 14:05:28 crc kubenswrapper[4771]: E0123 14:05:28.000224 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9074e8c-55ca-48b3-ae5f-9b06c4c3da84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.000248 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9074e8c-55ca-48b3-ae5f-9b06c4c3da84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.000535 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9074e8c-55ca-48b3-ae5f-9b06c4c3da84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.001384 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.004189 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.004676 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.004856 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.005004 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.012064 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j4gml"] Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.087974 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knds\" (UniqueName: \"kubernetes.io/projected/d3516501-2232-4e13-b529-5befbc170273-kube-api-access-5knds\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.088440 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.088470 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.190692 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.190770 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.190874 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5knds\" (UniqueName: \"kubernetes.io/projected/d3516501-2232-4e13-b529-5befbc170273-kube-api-access-5knds\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc 
kubenswrapper[4771]: I0123 14:05:28.200113 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.208158 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.209397 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5knds\" (UniqueName: \"kubernetes.io/projected/d3516501-2232-4e13-b529-5befbc170273-kube-api-access-5knds\") pod \"ssh-known-hosts-edpm-deployment-j4gml\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.320035 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.722811 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-j4gml"] Jan 23 14:05:28 crc kubenswrapper[4771]: I0123 14:05:28.923283 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" event={"ID":"d3516501-2232-4e13-b529-5befbc170273","Type":"ContainerStarted","Data":"0432ef3179334a5b4716453f4793e2f9ea810cabb3d3caf8bb864bb5204a96a1"} Jan 23 14:05:30 crc kubenswrapper[4771]: I0123 14:05:30.311527 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:05:30 crc kubenswrapper[4771]: I0123 14:05:30.311896 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:05:30 crc kubenswrapper[4771]: I0123 14:05:30.947155 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" event={"ID":"d3516501-2232-4e13-b529-5befbc170273","Type":"ContainerStarted","Data":"6d7de464277ea935487c1e35dfaa3bd464048f308aa550df04341296e2c263eb"} Jan 23 14:05:30 crc kubenswrapper[4771]: I0123 14:05:30.972982 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" podStartSLOduration=3.076709988 podStartE2EDuration="3.9729595s" podCreationTimestamp="2026-01-23 14:05:27 +0000 UTC" firstStartedPulling="2026-01-23 14:05:28.74715389 +0000 UTC m=+1969.769691525" lastFinishedPulling="2026-01-23 14:05:29.643403412 +0000 UTC m=+1970.665941037" observedRunningTime="2026-01-23 14:05:30.967738434 +0000 UTC m=+1971.990276059" watchObservedRunningTime="2026-01-23 
14:05:30.9729595 +0000 UTC m=+1971.995497125" Jan 23 14:05:38 crc kubenswrapper[4771]: I0123 14:05:38.025384 4771 generic.go:334] "Generic (PLEG): container finished" podID="d3516501-2232-4e13-b529-5befbc170273" containerID="6d7de464277ea935487c1e35dfaa3bd464048f308aa550df04341296e2c263eb" exitCode=0 Jan 23 14:05:38 crc kubenswrapper[4771]: I0123 14:05:38.025475 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" event={"ID":"d3516501-2232-4e13-b529-5befbc170273","Type":"ContainerDied","Data":"6d7de464277ea935487c1e35dfaa3bd464048f308aa550df04341296e2c263eb"} Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.472769 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.581384 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-ssh-key-openstack-edpm-ipam\") pod \"d3516501-2232-4e13-b529-5befbc170273\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.581604 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-inventory-0\") pod \"d3516501-2232-4e13-b529-5befbc170273\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.581902 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5knds\" (UniqueName: \"kubernetes.io/projected/d3516501-2232-4e13-b529-5befbc170273-kube-api-access-5knds\") pod \"d3516501-2232-4e13-b529-5befbc170273\" (UID: \"d3516501-2232-4e13-b529-5befbc170273\") " Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.589452 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3516501-2232-4e13-b529-5befbc170273-kube-api-access-5knds" (OuterVolumeSpecName: "kube-api-access-5knds") pod "d3516501-2232-4e13-b529-5befbc170273" (UID: "d3516501-2232-4e13-b529-5befbc170273"). InnerVolumeSpecName "kube-api-access-5knds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.613186 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3516501-2232-4e13-b529-5befbc170273" (UID: "d3516501-2232-4e13-b529-5befbc170273"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.618527 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d3516501-2232-4e13-b529-5befbc170273" (UID: "d3516501-2232-4e13-b529-5befbc170273"). InnerVolumeSpecName "inventory-0". 
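
The pairing just above of a "Generic (PLEG): container finished ... exitCode=0" record with a "SyncLoop (PLEG): event for pod ... ContainerDied" record recurs for every deployment job in this section. PLEG (the Pod Lifecycle Event Generator) periodically relists container state from the runtime, diffs it against the previous snapshot, and feeds the resulting events into the sync loop. A toy version of that diffing step, under the assumption that only running-to-exited transitions matter (the real PLEG also emits ContainerStarted and tracks sandboxes):

    package main

    import "fmt"

    // containerState is a simplified stand-in for what PLEG tracks per container.
    type containerState struct {
        running  bool
        exitCode int
    }

    // relist compares the previous snapshot with the current one and emits a
    // ContainerDied event for every container that was running and has exited.
    func relist(old, cur map[string]containerState) []string {
        var events []string
        for id, prev := range old {
            now := cur[id] // zero value (not running, exit 0) if the container vanished
            if prev.running && !now.running {
                events = append(events,
                    fmt.Sprintf("ContainerDied id=%s exitCode=%d", id, now.exitCode))
            }
        }
        return events
    }

    func main() {
        old := map[string]containerState{"6d7de464": {running: true}}
        cur := map[string]containerState{"6d7de464": {exitCode: 0}}
        for _, ev := range relist(old, cur) {
            fmt.Println(ev) // SyncLoop (PLEG) then dispatches on each event
        }
    }

For these one-shot Ansible jobs, exitCode=0 followed by ContainerDied is the success path, which is why each ContainerDied here is immediately followed by volume teardown rather than a restart.
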
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.686343 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5knds\" (UniqueName: \"kubernetes.io/projected/d3516501-2232-4e13-b529-5befbc170273-kube-api-access-5knds\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.686389 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:39 crc kubenswrapper[4771]: I0123 14:05:39.686401 4771 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d3516501-2232-4e13-b529-5befbc170273-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.049396 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" event={"ID":"d3516501-2232-4e13-b529-5befbc170273","Type":"ContainerDied","Data":"0432ef3179334a5b4716453f4793e2f9ea810cabb3d3caf8bb864bb5204a96a1"} Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.049466 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0432ef3179334a5b4716453f4793e2f9ea810cabb3d3caf8bb864bb5204a96a1" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.049545 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-j4gml" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.155389 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7"] Jan 23 14:05:40 crc kubenswrapper[4771]: E0123 14:05:40.155988 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3516501-2232-4e13-b529-5befbc170273" containerName="ssh-known-hosts-edpm-deployment" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.156008 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3516501-2232-4e13-b529-5befbc170273" containerName="ssh-known-hosts-edpm-deployment" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.156271 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3516501-2232-4e13-b529-5befbc170273" containerName="ssh-known-hosts-edpm-deployment" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.157210 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.160044 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.160242 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.160540 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.162507 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.177457 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7"] Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.299496 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn6bm\" (UniqueName: \"kubernetes.io/projected/1259451d-71d5-486c-9046-3f03879ecfeb-kube-api-access-fn6bm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.299580 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.299753 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.429047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn6bm\" (UniqueName: \"kubernetes.io/projected/1259451d-71d5-486c-9046-3f03879ecfeb-kube-api-access-fn6bm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.429140 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.429252 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.439277 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.439577 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.452201 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn6bm\" (UniqueName: \"kubernetes.io/projected/1259451d-71d5-486c-9046-3f03879ecfeb-kube-api-access-fn6bm\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-fvdh7\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:40 crc kubenswrapper[4771]: I0123 14:05:40.488758 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:41 crc kubenswrapper[4771]: I0123 14:05:41.105049 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7"] Jan 23 14:05:42 crc kubenswrapper[4771]: I0123 14:05:42.073170 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" event={"ID":"1259451d-71d5-486c-9046-3f03879ecfeb","Type":"ContainerStarted","Data":"6ac57d06336238de68a2d4a1254bf437f3c59e4d40af97c24040190ee8f4111a"} Jan 23 14:05:43 crc kubenswrapper[4771]: I0123 14:05:43.086442 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" event={"ID":"1259451d-71d5-486c-9046-3f03879ecfeb","Type":"ContainerStarted","Data":"ff542705c1ed6e92158018209ae8c279826c6bd80b6432b19b2424d6c0389a52"} Jan 23 14:05:43 crc kubenswrapper[4771]: I0123 14:05:43.105455 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" podStartSLOduration=1.931932734 podStartE2EDuration="3.105404131s" podCreationTimestamp="2026-01-23 14:05:40 +0000 UTC" firstStartedPulling="2026-01-23 14:05:41.088808337 +0000 UTC m=+1982.111345962" lastFinishedPulling="2026-01-23 14:05:42.262279734 +0000 UTC m=+1983.284817359" observedRunningTime="2026-01-23 14:05:43.103491129 +0000 UTC m=+1984.126028754" watchObservedRunningTime="2026-01-23 14:05:43.105404131 +0000 UTC m=+1984.127941756" Jan 23 14:05:50 crc kubenswrapper[4771]: I0123 14:05:50.467914 4771 scope.go:117] "RemoveContainer" containerID="fc468bdb99b3ca2be9b2472b3959694668010762c839d572a1ebf7a548fc4797" Jan 23 14:05:51 crc kubenswrapper[4771]: I0123 14:05:51.188113 4771 generic.go:334] "Generic (PLEG): container finished" podID="1259451d-71d5-486c-9046-3f03879ecfeb" 
containerID="ff542705c1ed6e92158018209ae8c279826c6bd80b6432b19b2424d6c0389a52" exitCode=0 Jan 23 14:05:51 crc kubenswrapper[4771]: I0123 14:05:51.188173 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" event={"ID":"1259451d-71d5-486c-9046-3f03879ecfeb","Type":"ContainerDied","Data":"ff542705c1ed6e92158018209ae8c279826c6bd80b6432b19b2424d6c0389a52"} Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.678402 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.744730 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-ssh-key-openstack-edpm-ipam\") pod \"1259451d-71d5-486c-9046-3f03879ecfeb\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.745022 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-inventory\") pod \"1259451d-71d5-486c-9046-3f03879ecfeb\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.745089 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn6bm\" (UniqueName: \"kubernetes.io/projected/1259451d-71d5-486c-9046-3f03879ecfeb-kube-api-access-fn6bm\") pod \"1259451d-71d5-486c-9046-3f03879ecfeb\" (UID: \"1259451d-71d5-486c-9046-3f03879ecfeb\") " Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.754132 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1259451d-71d5-486c-9046-3f03879ecfeb-kube-api-access-fn6bm" (OuterVolumeSpecName: "kube-api-access-fn6bm") pod "1259451d-71d5-486c-9046-3f03879ecfeb" (UID: "1259451d-71d5-486c-9046-3f03879ecfeb"). InnerVolumeSpecName "kube-api-access-fn6bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.781492 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1259451d-71d5-486c-9046-3f03879ecfeb" (UID: "1259451d-71d5-486c-9046-3f03879ecfeb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.793870 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-inventory" (OuterVolumeSpecName: "inventory") pod "1259451d-71d5-486c-9046-3f03879ecfeb" (UID: "1259451d-71d5-486c-9046-3f03879ecfeb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.848009 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.848055 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1259451d-71d5-486c-9046-3f03879ecfeb-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:52 crc kubenswrapper[4771]: I0123 14:05:52.848066 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn6bm\" (UniqueName: \"kubernetes.io/projected/1259451d-71d5-486c-9046-3f03879ecfeb-kube-api-access-fn6bm\") on node \"crc\" DevicePath \"\"" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.220230 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" event={"ID":"1259451d-71d5-486c-9046-3f03879ecfeb","Type":"ContainerDied","Data":"6ac57d06336238de68a2d4a1254bf437f3c59e4d40af97c24040190ee8f4111a"} Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.220636 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ac57d06336238de68a2d4a1254bf437f3c59e4d40af97c24040190ee8f4111a" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.220862 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-fvdh7" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.307691 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h"] Jan 23 14:05:53 crc kubenswrapper[4771]: E0123 14:05:53.308211 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1259451d-71d5-486c-9046-3f03879ecfeb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.308229 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1259451d-71d5-486c-9046-3f03879ecfeb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.308473 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1259451d-71d5-486c-9046-3f03879ecfeb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.309263 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.314721 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.314761 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.314970 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.315567 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.320220 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h"] Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.486254 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.486336 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx6g4\" (UniqueName: \"kubernetes.io/projected/f8f8277b-12c4-47fd-994c-22994850fec0-kube-api-access-jx6g4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.486467 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.589286 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.589538 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx6g4\" (UniqueName: \"kubernetes.io/projected/f8f8277b-12c4-47fd-994c-22994850fec0-kube-api-access-jx6g4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.589887 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.594454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.594536 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.611593 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx6g4\" (UniqueName: \"kubernetes.io/projected/f8f8277b-12c4-47fd-994c-22994850fec0-kube-api-access-jx6g4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:53 crc kubenswrapper[4771]: I0123 14:05:53.631726 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:05:54 crc kubenswrapper[4771]: I0123 14:05:54.217046 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h"] Jan 23 14:05:54 crc kubenswrapper[4771]: I0123 14:05:54.233662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" event={"ID":"f8f8277b-12c4-47fd-994c-22994850fec0","Type":"ContainerStarted","Data":"16367b0079b3b83573a0ab0fe74a4efeed36e6f352ce2ece046df0b1df5ac178"} Jan 23 14:05:55 crc kubenswrapper[4771]: I0123 14:05:55.248841 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" event={"ID":"f8f8277b-12c4-47fd-994c-22994850fec0","Type":"ContainerStarted","Data":"f5929fed8768e76a61477d0e76acb33af3e4eec2dafec9e1835496807c0e7212"} Jan 23 14:05:55 crc kubenswrapper[4771]: I0123 14:05:55.274327 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" podStartSLOduration=1.8124743269999999 podStartE2EDuration="2.273479151s" podCreationTimestamp="2026-01-23 14:05:53 +0000 UTC" firstStartedPulling="2026-01-23 14:05:54.21870165 +0000 UTC m=+1995.241239275" lastFinishedPulling="2026-01-23 14:05:54.679706464 +0000 UTC m=+1995.702244099" observedRunningTime="2026-01-23 14:05:55.270168795 +0000 UTC m=+1996.292706420" watchObservedRunningTime="2026-01-23 14:05:55.273479151 +0000 UTC m=+1996.296016786" Jan 23 14:06:00 crc kubenswrapper[4771]: I0123 14:06:00.312493 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 23 14:06:00 crc kubenswrapper[4771]: I0123 14:06:00.313286 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:06:05 crc kubenswrapper[4771]: I0123 14:06:05.351341 4771 generic.go:334] "Generic (PLEG): container finished" podID="f8f8277b-12c4-47fd-994c-22994850fec0" containerID="f5929fed8768e76a61477d0e76acb33af3e4eec2dafec9e1835496807c0e7212" exitCode=0 Jan 23 14:06:05 crc kubenswrapper[4771]: I0123 14:06:05.351402 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" event={"ID":"f8f8277b-12c4-47fd-994c-22994850fec0","Type":"ContainerDied","Data":"f5929fed8768e76a61477d0e76acb33af3e4eec2dafec9e1835496807c0e7212"} Jan 23 14:06:06 crc kubenswrapper[4771]: I0123 14:06:06.869307 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.018920 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory\") pod \"f8f8277b-12c4-47fd-994c-22994850fec0\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.018962 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-ssh-key-openstack-edpm-ipam\") pod \"f8f8277b-12c4-47fd-994c-22994850fec0\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.019270 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx6g4\" (UniqueName: \"kubernetes.io/projected/f8f8277b-12c4-47fd-994c-22994850fec0-kube-api-access-jx6g4\") pod \"f8f8277b-12c4-47fd-994c-22994850fec0\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.027863 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f8277b-12c4-47fd-994c-22994850fec0-kube-api-access-jx6g4" (OuterVolumeSpecName: "kube-api-access-jx6g4") pod "f8f8277b-12c4-47fd-994c-22994850fec0" (UID: "f8f8277b-12c4-47fd-994c-22994850fec0"). InnerVolumeSpecName "kube-api-access-jx6g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:07 crc kubenswrapper[4771]: E0123 14:06:07.059907 4771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory podName:f8f8277b-12c4-47fd-994c-22994850fec0 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:07.559861632 +0000 UTC m=+2008.582399257 (durationBeforeRetry 500ms). 
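
The nestedpendingoperations record above shows how the kubelet handles a failed volume operation: rather than retrying immediately, it stamps the operation with a "no retries permitted until" deadline (here 500ms out) and lets the reconciler pick it up again after that. The error detail continues just below this note; the failure is a cleanup race (the volume-subpaths directory is already gone), and the retry visible further down at 14:06:07.635 succeeds. A sketch of the backoff bookkeeping, where the 500ms initial delay matches the record but the doubling and the ceiling are assumptions for illustration, not taken from the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff records when a failed operation may next be attempted.
    type backoff struct {
        delay    time.Duration
        notUntil time.Time
    }

    // failed grows the delay on each consecutive failure, up to a ceiling,
    // and pushes the retry deadline out accordingly.
    func (b *backoff) failed(now time.Time) {
        const initial = 500 * time.Millisecond
        const ceiling = 2 * time.Minute
        if b.delay == 0 {
            b.delay = initial
        } else {
            b.delay *= 2
            if b.delay > ceiling {
                b.delay = ceiling
            }
        }
        b.notUntil = now.Add(b.delay)
    }

    func (b *backoff) ready(now time.Time) bool { return !now.Before(b.notUntil) }

    func main() {
        var b backoff
        now := time.Now()
        b.failed(now)
        fmt.Println("retry allowed now:", b.ready(now)) // false
        fmt.Println("durationBeforeRetry:", b.delay)    // 500ms
        fmt.Println("no retries permitted until:", b.notUntil.Format(time.RFC3339Nano))
    }

Because the deadline is only 500ms out, the reconciler's next pass clears the volume, which is why "Volume detached for volume \"inventory\"" still appears for this pod shortly afterwards.
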
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory") pod "f8f8277b-12c4-47fd-994c-22994850fec0" (UID: "f8f8277b-12c4-47fd-994c-22994850fec0") : error deleting /var/lib/kubelet/pods/f8f8277b-12c4-47fd-994c-22994850fec0/volume-subpaths: remove /var/lib/kubelet/pods/f8f8277b-12c4-47fd-994c-22994850fec0/volume-subpaths: no such file or directory Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.072292 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f8f8277b-12c4-47fd-994c-22994850fec0" (UID: "f8f8277b-12c4-47fd-994c-22994850fec0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.122826 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx6g4\" (UniqueName: \"kubernetes.io/projected/f8f8277b-12c4-47fd-994c-22994850fec0-kube-api-access-jx6g4\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.122862 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.375051 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" event={"ID":"f8f8277b-12c4-47fd-994c-22994850fec0","Type":"ContainerDied","Data":"16367b0079b3b83573a0ab0fe74a4efeed36e6f352ce2ece046df0b1df5ac178"} Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.375121 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16367b0079b3b83573a0ab0fe74a4efeed36e6f352ce2ece046df0b1df5ac178" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.375452 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.479871 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j"] Jan 23 14:06:07 crc kubenswrapper[4771]: E0123 14:06:07.480457 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8f8277b-12c4-47fd-994c-22994850fec0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.480482 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f8277b-12c4-47fd-994c-22994850fec0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.480747 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8f8277b-12c4-47fd-994c-22994850fec0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.481576 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.484585 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.484646 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.484707 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.484910 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.502223 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j"] Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635076 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory\") pod \"f8f8277b-12c4-47fd-994c-22994850fec0\" (UID: \"f8f8277b-12c4-47fd-994c-22994850fec0\") " Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635709 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635741 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635772 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635809 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ttv\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-kube-api-access-22ttv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635894 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635924 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635956 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.635980 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.636006 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.636039 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.636057 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.636084 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.636120 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.636144 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.640576 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory" (OuterVolumeSpecName: "inventory") pod "f8f8277b-12c4-47fd-994c-22994850fec0" (UID: "f8f8277b-12c4-47fd-994c-22994850fec0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.739919 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ttv\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-kube-api-access-22ttv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740058 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740096 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740129 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: 
I0123 14:06:07.740155 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740177 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740210 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740262 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740294 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740316 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740339 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740355 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740379 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.740449 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8f8277b-12c4-47fd-994c-22994850fec0-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.762613 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.763332 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.768043 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.768754 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.775470 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.786336 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.787825 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.787956 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.789365 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ttv\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-kube-api-access-22ttv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.799922 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.800781 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.808484 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.823615 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:07 crc kubenswrapper[4771]: I0123 14:06:07.836141 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:08 crc kubenswrapper[4771]: I0123 14:06:08.121766 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:08 crc kubenswrapper[4771]: I0123 14:06:08.731477 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j"] Jan 23 14:06:09 crc kubenswrapper[4771]: I0123 14:06:09.408104 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" event={"ID":"331d5ea1-caae-41d0-8986-01a8e698861c","Type":"ContainerStarted","Data":"8d959f5515976f245fdeb28911925fdc31e6fb2c4516f411a8d5f8cb5921c368"} Jan 23 14:06:10 crc kubenswrapper[4771]: I0123 14:06:10.424779 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" event={"ID":"331d5ea1-caae-41d0-8986-01a8e698861c","Type":"ContainerStarted","Data":"eab09a699932f56a6dd8b9090df0111d28bde59b174d66009a2c6e05059bd629"} Jan 23 14:06:10 crc kubenswrapper[4771]: I0123 14:06:10.467841 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" podStartSLOduration=2.49241553 podStartE2EDuration="3.467818054s" podCreationTimestamp="2026-01-23 14:06:07 +0000 UTC" firstStartedPulling="2026-01-23 14:06:08.737629996 +0000 UTC m=+2009.760167611" lastFinishedPulling="2026-01-23 14:06:09.71303249 +0000 UTC m=+2010.735570135" observedRunningTime="2026-01-23 14:06:10.465569073 +0000 UTC m=+2011.488106768" watchObservedRunningTime="2026-01-23 14:06:10.467818054 +0000 UTC m=+2011.490355679" Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.312486 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.313324 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.313384 4771 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.314564 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a843b68343ae30fc8b49314e7c493c6427401850d3e744b30626dc7cd829606"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.314643 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://1a843b68343ae30fc8b49314e7c493c6427401850d3e744b30626dc7cd829606" gracePeriod=600 Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.674754 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="1a843b68343ae30fc8b49314e7c493c6427401850d3e744b30626dc7cd829606" exitCode=0 Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.674822 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"1a843b68343ae30fc8b49314e7c493c6427401850d3e744b30626dc7cd829606"} Jan 23 14:06:30 crc kubenswrapper[4771]: I0123 14:06:30.675619 4771 scope.go:117] "RemoveContainer" containerID="17d33f6d810d983fe2000d946a226e2553f747f8bc5bb14673178008fd4ada40" Jan 23 14:06:31 crc kubenswrapper[4771]: I0123 14:06:31.690173 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"} Jan 23 14:06:53 crc kubenswrapper[4771]: I0123 14:06:53.947424 4771 generic.go:334] "Generic (PLEG): container finished" podID="331d5ea1-caae-41d0-8986-01a8e698861c" containerID="eab09a699932f56a6dd8b9090df0111d28bde59b174d66009a2c6e05059bd629" exitCode=0 Jan 23 14:06:53 crc kubenswrapper[4771]: I0123 14:06:53.947454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" event={"ID":"331d5ea1-caae-41d0-8986-01a8e698861c","Type":"ContainerDied","Data":"eab09a699932f56a6dd8b9090df0111d28bde59b174d66009a2c6e05059bd629"} Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.438303 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617104 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-telemetry-combined-ca-bundle\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617321 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ttv\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-kube-api-access-22ttv\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617466 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ovn-combined-ca-bundle\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617510 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-inventory\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617570 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617603 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-bootstrap-combined-ca-bundle\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617657 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-neutron-metadata-combined-ca-bundle\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617717 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617803 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ssh-key-openstack-edpm-ipam\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc 
kubenswrapper[4771]: I0123 14:06:55.617829 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-libvirt-combined-ca-bundle\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617856 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-repo-setup-combined-ca-bundle\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617881 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-nova-combined-ca-bundle\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.617973 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.618000 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"331d5ea1-caae-41d0-8986-01a8e698861c\" (UID: \"331d5ea1-caae-41d0-8986-01a8e698861c\") " Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.627995 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.628462 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.631316 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.631790 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.631884 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-kube-api-access-22ttv" (OuterVolumeSpecName: "kube-api-access-22ttv") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "kube-api-access-22ttv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.632050 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.632120 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.632557 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.633174 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.634272 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.635133 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.641601 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.664957 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.672775 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-inventory" (OuterVolumeSpecName: "inventory") pod "331d5ea1-caae-41d0-8986-01a8e698861c" (UID: "331d5ea1-caae-41d0-8986-01a8e698861c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729107 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729527 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729559 4771 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729575 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ttv\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-kube-api-access-22ttv\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729588 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729599 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729615 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729627 4771 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729638 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729650 4771 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/331d5ea1-caae-41d0-8986-01a8e698861c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729666 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729677 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729688 4771 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.729701 4771 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331d5ea1-caae-41d0-8986-01a8e698861c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.971302 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" event={"ID":"331d5ea1-caae-41d0-8986-01a8e698861c","Type":"ContainerDied","Data":"8d959f5515976f245fdeb28911925fdc31e6fb2c4516f411a8d5f8cb5921c368"} Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.971369 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d959f5515976f245fdeb28911925fdc31e6fb2c4516f411a8d5f8cb5921c368" Jan 23 14:06:55 crc kubenswrapper[4771]: I0123 14:06:55.971498 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.123439 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb"] Jan 23 14:06:56 crc kubenswrapper[4771]: E0123 14:06:56.124132 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331d5ea1-caae-41d0-8986-01a8e698861c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.124160 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="331d5ea1-caae-41d0-8986-01a8e698861c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.124378 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="331d5ea1-caae-41d0-8986-01a8e698861c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.125353 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.130258 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.131007 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.131150 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.131261 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.131375 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.134345 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb"] Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.241494 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kx27\" (UniqueName: \"kubernetes.io/projected/161b748e-6a65-4a13-872a-5f00eb187424-kube-api-access-5kx27\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.241822 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.242085 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/161b748e-6a65-4a13-872a-5f00eb187424-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.242165 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.242723 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.345207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.345293 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kx27\" (UniqueName: \"kubernetes.io/projected/161b748e-6a65-4a13-872a-5f00eb187424-kube-api-access-5kx27\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.345365 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.345426 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/161b748e-6a65-4a13-872a-5f00eb187424-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.345450 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.347125 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/161b748e-6a65-4a13-872a-5f00eb187424-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.351714 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.352221 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.353167 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.367025 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kx27\" (UniqueName: \"kubernetes.io/projected/161b748e-6a65-4a13-872a-5f00eb187424-kube-api-access-5kx27\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-s9bmb\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:56 crc kubenswrapper[4771]: I0123 14:06:56.454069 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:06:57 crc kubenswrapper[4771]: I0123 14:06:57.053576 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb"] Jan 23 14:06:58 crc kubenswrapper[4771]: I0123 14:06:58.006195 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" event={"ID":"161b748e-6a65-4a13-872a-5f00eb187424","Type":"ContainerStarted","Data":"2c35e29d9d3b900a9d1aee77e69b5f06442106b5497794be47116912dad80bbe"} Jan 23 14:06:58 crc kubenswrapper[4771]: I0123 14:06:58.006794 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" event={"ID":"161b748e-6a65-4a13-872a-5f00eb187424","Type":"ContainerStarted","Data":"fa6dfe9ac6ff1d6c320aa35d72e3891842874d5553827c464cdf2853c9376ea2"} Jan 23 14:06:58 crc kubenswrapper[4771]: I0123 14:06:58.046919 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" podStartSLOduration=1.531585057 podStartE2EDuration="2.046894395s" podCreationTimestamp="2026-01-23 14:06:56 +0000 UTC" firstStartedPulling="2026-01-23 14:06:57.063341253 +0000 UTC m=+2058.085878878" lastFinishedPulling="2026-01-23 14:06:57.578650591 +0000 UTC m=+2058.601188216" observedRunningTime="2026-01-23 14:06:58.033898722 +0000 UTC m=+2059.056436357" watchObservedRunningTime="2026-01-23 14:06:58.046894395 +0000 UTC m=+2059.069432020" Jan 23 14:08:11 crc kubenswrapper[4771]: I0123 14:08:11.789232 4771 generic.go:334] "Generic (PLEG): container finished" podID="161b748e-6a65-4a13-872a-5f00eb187424" containerID="2c35e29d9d3b900a9d1aee77e69b5f06442106b5497794be47116912dad80bbe" exitCode=0 Jan 23 14:08:11 crc kubenswrapper[4771]: I0123 14:08:11.789577 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" event={"ID":"161b748e-6a65-4a13-872a-5f00eb187424","Type":"ContainerDied","Data":"2c35e29d9d3b900a9d1aee77e69b5f06442106b5497794be47116912dad80bbe"} Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.269667 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.343113 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kx27\" (UniqueName: \"kubernetes.io/projected/161b748e-6a65-4a13-872a-5f00eb187424-kube-api-access-5kx27\") pod \"161b748e-6a65-4a13-872a-5f00eb187424\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.343303 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/161b748e-6a65-4a13-872a-5f00eb187424-ovncontroller-config-0\") pod \"161b748e-6a65-4a13-872a-5f00eb187424\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.343625 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ovn-combined-ca-bundle\") pod \"161b748e-6a65-4a13-872a-5f00eb187424\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.343696 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-inventory\") pod \"161b748e-6a65-4a13-872a-5f00eb187424\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.343807 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ssh-key-openstack-edpm-ipam\") pod \"161b748e-6a65-4a13-872a-5f00eb187424\" (UID: \"161b748e-6a65-4a13-872a-5f00eb187424\") " Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.351709 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161b748e-6a65-4a13-872a-5f00eb187424-kube-api-access-5kx27" (OuterVolumeSpecName: "kube-api-access-5kx27") pod "161b748e-6a65-4a13-872a-5f00eb187424" (UID: "161b748e-6a65-4a13-872a-5f00eb187424"). InnerVolumeSpecName "kube-api-access-5kx27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.355094 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "161b748e-6a65-4a13-872a-5f00eb187424" (UID: "161b748e-6a65-4a13-872a-5f00eb187424"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.372746 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/161b748e-6a65-4a13-872a-5f00eb187424-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "161b748e-6a65-4a13-872a-5f00eb187424" (UID: "161b748e-6a65-4a13-872a-5f00eb187424"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.378097 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "161b748e-6a65-4a13-872a-5f00eb187424" (UID: "161b748e-6a65-4a13-872a-5f00eb187424"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.396056 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-inventory" (OuterVolumeSpecName: "inventory") pod "161b748e-6a65-4a13-872a-5f00eb187424" (UID: "161b748e-6a65-4a13-872a-5f00eb187424"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.449233 4771 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.449467 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.449558 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/161b748e-6a65-4a13-872a-5f00eb187424-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.449629 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kx27\" (UniqueName: \"kubernetes.io/projected/161b748e-6a65-4a13-872a-5f00eb187424-kube-api-access-5kx27\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.449705 4771 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/161b748e-6a65-4a13-872a-5f00eb187424-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.812561 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" event={"ID":"161b748e-6a65-4a13-872a-5f00eb187424","Type":"ContainerDied","Data":"fa6dfe9ac6ff1d6c320aa35d72e3891842874d5553827c464cdf2853c9376ea2"} Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.812616 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6dfe9ac6ff1d6c320aa35d72e3891842874d5553827c464cdf2853c9376ea2" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.812688 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-s9bmb" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.918091 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt"] Jan 23 14:08:13 crc kubenswrapper[4771]: E0123 14:08:13.918731 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161b748e-6a65-4a13-872a-5f00eb187424" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.918758 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="161b748e-6a65-4a13-872a-5f00eb187424" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.918979 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="161b748e-6a65-4a13-872a-5f00eb187424" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.919823 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.924841 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.925499 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.925642 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.926184 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.926198 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.927430 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:08:13 crc kubenswrapper[4771]: I0123 14:08:13.931703 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt"] Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.063362 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.063440 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.063511 4771 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.063549 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.064117 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.064294 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjs6t\" (UniqueName: \"kubernetes.io/projected/0ec79d97-ee55-489b-935e-51ae32de7ca3-kube-api-access-vjs6t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.166957 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.167052 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjs6t\" (UniqueName: \"kubernetes.io/projected/0ec79d97-ee55-489b-935e-51ae32de7ca3-kube-api-access-vjs6t\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.167098 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.167136 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.167189 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.167235 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.172245 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.172769 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.172871 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.174977 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.175233 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.189271 
Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.240184 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt"
Jan 23 14:08:14 crc kubenswrapper[4771]: I0123 14:08:14.840618 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt"]
Jan 23 14:08:15 crc kubenswrapper[4771]: I0123 14:08:15.834575 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" event={"ID":"0ec79d97-ee55-489b-935e-51ae32de7ca3","Type":"ContainerStarted","Data":"a74e5d2109741f736d23cf420f2c46a5c40c6e56c5222e893c23e9c5ba49144b"}
Jan 23 14:08:16 crc kubenswrapper[4771]: I0123 14:08:16.848403 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" event={"ID":"0ec79d97-ee55-489b-935e-51ae32de7ca3","Type":"ContainerStarted","Data":"08787c2c7ea04790659242b6baf60e5da670e4cf7e751cc3ad08c7050c3bd184"}
Jan 23 14:08:16 crc kubenswrapper[4771]: I0123 14:08:16.869821 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" podStartSLOduration=2.889138692 podStartE2EDuration="3.869791303s" podCreationTimestamp="2026-01-23 14:08:13 +0000 UTC" firstStartedPulling="2026-01-23 14:08:14.849033527 +0000 UTC m=+2135.871571162" lastFinishedPulling="2026-01-23 14:08:15.829686148 +0000 UTC m=+2136.852223773" observedRunningTime="2026-01-23 14:08:16.86466025 +0000 UTC m=+2137.887197875" watchObservedRunningTime="2026-01-23 14:08:16.869791303 +0000 UTC m=+2137.892328928"
Jan 23 14:08:30 crc kubenswrapper[4771]: I0123 14:08:30.311586 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:08:30 crc kubenswrapper[4771]: I0123 14:08:30.313379 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:09:00 crc kubenswrapper[4771]: I0123 14:09:00.312283 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:09:00 crc kubenswrapper[4771]: I0123 14:09:00.313280 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:09:13 crc kubenswrapper[4771]: I0123 14:09:13.462635 4771 generic.go:334] "Generic (PLEG): container finished" podID="0ec79d97-ee55-489b-935e-51ae32de7ca3" containerID="08787c2c7ea04790659242b6baf60e5da670e4cf7e751cc3ad08c7050c3bd184" exitCode=0
Jan 23 14:09:13 crc kubenswrapper[4771]: I0123 14:09:13.462732 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" event={"ID":"0ec79d97-ee55-489b-935e-51ae32de7ca3","Type":"ContainerDied","Data":"08787c2c7ea04790659242b6baf60e5da670e4cf7e751cc3ad08c7050c3bd184"}
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.013704 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt"
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.120949 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjs6t\" (UniqueName: \"kubernetes.io/projected/0ec79d97-ee55-489b-935e-51ae32de7ca3-kube-api-access-vjs6t\") pod \"0ec79d97-ee55-489b-935e-51ae32de7ca3\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") "
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.121084 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-inventory\") pod \"0ec79d97-ee55-489b-935e-51ae32de7ca3\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") "
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.121143 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-nova-metadata-neutron-config-0\") pod \"0ec79d97-ee55-489b-935e-51ae32de7ca3\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") "
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.121202 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0ec79d97-ee55-489b-935e-51ae32de7ca3\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") "
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.121483 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-ssh-key-openstack-edpm-ipam\") pod \"0ec79d97-ee55-489b-935e-51ae32de7ca3\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") "
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.121611 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-metadata-combined-ca-bundle\") pod \"0ec79d97-ee55-489b-935e-51ae32de7ca3\" (UID: \"0ec79d97-ee55-489b-935e-51ae32de7ca3\") "
Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.128852 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0ec79d97-ee55-489b-935e-51ae32de7ca3" (UID: "0ec79d97-ee55-489b-935e-51ae32de7ca3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
"0ec79d97-ee55-489b-935e-51ae32de7ca3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.128892 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec79d97-ee55-489b-935e-51ae32de7ca3-kube-api-access-vjs6t" (OuterVolumeSpecName: "kube-api-access-vjs6t") pod "0ec79d97-ee55-489b-935e-51ae32de7ca3" (UID: "0ec79d97-ee55-489b-935e-51ae32de7ca3"). InnerVolumeSpecName "kube-api-access-vjs6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.158588 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0ec79d97-ee55-489b-935e-51ae32de7ca3" (UID: "0ec79d97-ee55-489b-935e-51ae32de7ca3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.159286 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-inventory" (OuterVolumeSpecName: "inventory") pod "0ec79d97-ee55-489b-935e-51ae32de7ca3" (UID: "0ec79d97-ee55-489b-935e-51ae32de7ca3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.159956 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0ec79d97-ee55-489b-935e-51ae32de7ca3" (UID: "0ec79d97-ee55-489b-935e-51ae32de7ca3"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.165132 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0ec79d97-ee55-489b-935e-51ae32de7ca3" (UID: "0ec79d97-ee55-489b-935e-51ae32de7ca3"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.223992 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjs6t\" (UniqueName: \"kubernetes.io/projected/0ec79d97-ee55-489b-935e-51ae32de7ca3-kube-api-access-vjs6t\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.224480 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.224494 4771 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.224506 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.224519 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.224533 4771 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec79d97-ee55-489b-935e-51ae32de7ca3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.485078 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" event={"ID":"0ec79d97-ee55-489b-935e-51ae32de7ca3","Type":"ContainerDied","Data":"a74e5d2109741f736d23cf420f2c46a5c40c6e56c5222e893c23e9c5ba49144b"} Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.485158 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74e5d2109741f736d23cf420f2c46a5c40c6e56c5222e893c23e9c5ba49144b" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.485196 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.602959 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z"] Jan 23 14:09:15 crc kubenswrapper[4771]: E0123 14:09:15.603693 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec79d97-ee55-489b-935e-51ae32de7ca3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.603713 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec79d97-ee55-489b-935e-51ae32de7ca3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.603963 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec79d97-ee55-489b-935e-51ae32de7ca3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.604841 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.616304 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.616848 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.617030 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.617125 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.617141 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.623058 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z"] Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.742370 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzkb6\" (UniqueName: \"kubernetes.io/projected/af9e27b6-338f-471a-ae8b-041038e92cfe-kube-api-access-fzkb6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.743000 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.743164 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" 
(UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.743367 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.743640 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.845451 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.845538 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.845612 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzkb6\" (UniqueName: \"kubernetes.io/projected/af9e27b6-338f-471a-ae8b-041038e92cfe-kube-api-access-fzkb6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.845690 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.845711 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.851628 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: 
\"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.852051 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.861209 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.864219 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.866202 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzkb6\" (UniqueName: \"kubernetes.io/projected/af9e27b6-338f-471a-ae8b-041038e92cfe-kube-api-access-fzkb6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:15 crc kubenswrapper[4771]: I0123 14:09:15.938158 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" Jan 23 14:09:16 crc kubenswrapper[4771]: I0123 14:09:16.576272 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z"] Jan 23 14:09:17 crc kubenswrapper[4771]: I0123 14:09:17.509029 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" event={"ID":"af9e27b6-338f-471a-ae8b-041038e92cfe","Type":"ContainerStarted","Data":"3201f6fda2f93b9f3d06de559010fcee9e12a73e996551afcab2bd463470834d"} Jan 23 14:09:18 crc kubenswrapper[4771]: I0123 14:09:18.521432 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" event={"ID":"af9e27b6-338f-471a-ae8b-041038e92cfe","Type":"ContainerStarted","Data":"bf265b810903046d60c84edbdd110a7062a85d375ca49e47aa11736ce5d22e42"} Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.312111 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.312847 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.312900 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.313942 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.314003 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" gracePeriod=600 Jan 23 14:09:30 crc kubenswrapper[4771]: E0123 14:09:30.457861 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.668294 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" exitCode=0 Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.668346 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"} Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.668388 4771 scope.go:117] "RemoveContainer" containerID="1a843b68343ae30fc8b49314e7c493c6427401850d3e744b30626dc7cd829606" Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.669105 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" Jan 23 14:09:30 crc kubenswrapper[4771]: E0123 14:09:30.669428 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:09:30 crc kubenswrapper[4771]: I0123 14:09:30.706234 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" podStartSLOduration=15.000527728 podStartE2EDuration="15.706197617s" podCreationTimestamp="2026-01-23 14:09:15 +0000 UTC" firstStartedPulling="2026-01-23 14:09:16.596518776 +0000 UTC m=+2197.619056401" lastFinishedPulling="2026-01-23 14:09:17.302188665 +0000 UTC m=+2198.324726290" observedRunningTime="2026-01-23 14:09:18.549311048 +0000 UTC m=+2199.571848673" watchObservedRunningTime="2026-01-23 14:09:30.706197617 +0000 UTC m=+2211.728735262" Jan 23 14:09:45 crc kubenswrapper[4771]: I0123 14:09:45.228435 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" Jan 23 14:09:45 crc kubenswrapper[4771]: E0123 14:09:45.229395 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.057433 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nh6fw"] Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.060472 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.074822 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh6fw"] Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.156282 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2nr\" (UniqueName: \"kubernetes.io/projected/5c6ef751-24a3-4f66-afe7-dc6c175fc306-kube-api-access-6x2nr\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.156428 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-utilities\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.156473 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-catalog-content\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.257905 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x2nr\" (UniqueName: \"kubernetes.io/projected/5c6ef751-24a3-4f66-afe7-dc6c175fc306-kube-api-access-6x2nr\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.258554 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-utilities\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.258703 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-catalog-content\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.259091 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-utilities\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.259605 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-catalog-content\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.283998 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6x2nr\" (UniqueName: \"kubernetes.io/projected/5c6ef751-24a3-4f66-afe7-dc6c175fc306-kube-api-access-6x2nr\") pod \"community-operators-nh6fw\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:46 crc kubenswrapper[4771]: I0123 14:09:46.393818 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:47 crc kubenswrapper[4771]: I0123 14:09:47.072967 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh6fw"] Jan 23 14:09:47 crc kubenswrapper[4771]: I0123 14:09:47.879666 4771 generic.go:334] "Generic (PLEG): container finished" podID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerID="15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22" exitCode=0 Jan 23 14:09:47 crc kubenswrapper[4771]: I0123 14:09:47.880114 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh6fw" event={"ID":"5c6ef751-24a3-4f66-afe7-dc6c175fc306","Type":"ContainerDied","Data":"15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22"} Jan 23 14:09:47 crc kubenswrapper[4771]: I0123 14:09:47.880639 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh6fw" event={"ID":"5c6ef751-24a3-4f66-afe7-dc6c175fc306","Type":"ContainerStarted","Data":"f1f47df7e1465b1c1821e364b7309b41049caffb975e4e5cbc9b60408a0deb9c"} Jan 23 14:09:47 crc kubenswrapper[4771]: I0123 14:09:47.883239 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:09:48 crc kubenswrapper[4771]: I0123 14:09:48.894279 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh6fw" event={"ID":"5c6ef751-24a3-4f66-afe7-dc6c175fc306","Type":"ContainerStarted","Data":"20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6"} Jan 23 14:09:49 crc kubenswrapper[4771]: I0123 14:09:49.906020 4771 generic.go:334] "Generic (PLEG): container finished" podID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerID="20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6" exitCode=0 Jan 23 14:09:49 crc kubenswrapper[4771]: I0123 14:09:49.906097 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh6fw" event={"ID":"5c6ef751-24a3-4f66-afe7-dc6c175fc306","Type":"ContainerDied","Data":"20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6"} Jan 23 14:09:50 crc kubenswrapper[4771]: I0123 14:09:50.919942 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh6fw" event={"ID":"5c6ef751-24a3-4f66-afe7-dc6c175fc306","Type":"ContainerStarted","Data":"a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787"} Jan 23 14:09:50 crc kubenswrapper[4771]: I0123 14:09:50.943920 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nh6fw" podStartSLOduration=2.405037199 podStartE2EDuration="4.943890908s" podCreationTimestamp="2026-01-23 14:09:46 +0000 UTC" firstStartedPulling="2026-01-23 14:09:47.882884145 +0000 UTC m=+2228.905421760" lastFinishedPulling="2026-01-23 14:09:50.421737854 +0000 UTC m=+2231.444275469" observedRunningTime="2026-01-23 14:09:50.93911089 +0000 UTC m=+2231.961648525" watchObservedRunningTime="2026-01-23 
14:09:50.943890908 +0000 UTC m=+2231.966428533" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.421117 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lj4kr"] Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.424534 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.440492 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj4kr"] Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.544591 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-utilities\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.544706 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-catalog-content\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.544761 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hkw\" (UniqueName: \"kubernetes.io/projected/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-kube-api-access-x5hkw\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.646878 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-utilities\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.647010 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-catalog-content\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.647057 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5hkw\" (UniqueName: \"kubernetes.io/projected/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-kube-api-access-x5hkw\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.647713 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-utilities\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.647758 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-catalog-content\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.670551 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5hkw\" (UniqueName: \"kubernetes.io/projected/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-kube-api-access-x5hkw\") pod \"certified-operators-lj4kr\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") " pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:53 crc kubenswrapper[4771]: I0123 14:09:53.787077 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:09:54 crc kubenswrapper[4771]: I0123 14:09:54.242266 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lj4kr"] Jan 23 14:09:54 crc kubenswrapper[4771]: I0123 14:09:54.977933 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerID="b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592" exitCode=0 Jan 23 14:09:54 crc kubenswrapper[4771]: I0123 14:09:54.978016 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4kr" event={"ID":"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f","Type":"ContainerDied","Data":"b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592"} Jan 23 14:09:54 crc kubenswrapper[4771]: I0123 14:09:54.978549 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4kr" event={"ID":"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f","Type":"ContainerStarted","Data":"b1bcac56493c2d95e57c7c546b92f31460e776ec2d77dea7ac29e5addc485655"} Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.025031 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddqts"] Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.028394 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.038244 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddqts"] Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.119002 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-utilities\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.119212 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-catalog-content\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.119266 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8x8q\" (UniqueName: \"kubernetes.io/projected/a20e3de0-1946-423d-b59a-be9866f97797-kube-api-access-m8x8q\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.222366 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-utilities\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.222523 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-catalog-content\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.222571 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8x8q\" (UniqueName: \"kubernetes.io/projected/a20e3de0-1946-423d-b59a-be9866f97797-kube-api-access-m8x8q\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.223252 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-utilities\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.223338 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-catalog-content\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.228433 4771 scope.go:117] "RemoveContainer" 
containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" Jan 23 14:09:56 crc kubenswrapper[4771]: E0123 14:09:56.228847 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.244893 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8x8q\" (UniqueName: \"kubernetes.io/projected/a20e3de0-1946-423d-b59a-be9866f97797-kube-api-access-m8x8q\") pod \"redhat-operators-ddqts\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") " pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.379269 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddqts" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.400160 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.400224 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:56 crc kubenswrapper[4771]: I0123 14:09:56.503365 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:57 crc kubenswrapper[4771]: I0123 14:09:57.007445 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4kr" event={"ID":"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f","Type":"ContainerStarted","Data":"f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2"} Jan 23 14:09:57 crc kubenswrapper[4771]: I0123 14:09:57.068352 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:09:57 crc kubenswrapper[4771]: I0123 14:09:57.402028 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddqts"] Jan 23 14:09:58 crc kubenswrapper[4771]: I0123 14:09:58.018288 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddqts" event={"ID":"a20e3de0-1946-423d-b59a-be9866f97797","Type":"ContainerStarted","Data":"af512ae2b6bae8efd115be4daa6fcacf12d0ca498e449682ebea89237c98e39e"} Jan 23 14:09:59 crc kubenswrapper[4771]: I0123 14:09:59.031790 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerID="f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2" exitCode=0 Jan 23 14:09:59 crc kubenswrapper[4771]: I0123 14:09:59.031888 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4kr" event={"ID":"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f","Type":"ContainerDied","Data":"f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2"} Jan 23 14:09:59 crc kubenswrapper[4771]: I0123 14:09:59.034299 4771 generic.go:334] "Generic (PLEG): container finished" podID="a20e3de0-1946-423d-b59a-be9866f97797" containerID="1497dd4dfdfcc38d39cc4100761d6e5886798032d6e56c87576955e1d3c95513" 
exitCode=0 Jan 23 14:09:59 crc kubenswrapper[4771]: I0123 14:09:59.034345 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddqts" event={"ID":"a20e3de0-1946-423d-b59a-be9866f97797","Type":"ContainerDied","Data":"1497dd4dfdfcc38d39cc4100761d6e5886798032d6e56c87576955e1d3c95513"} Jan 23 14:09:59 crc kubenswrapper[4771]: I0123 14:09:59.416753 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh6fw"] Jan 23 14:09:59 crc kubenswrapper[4771]: I0123 14:09:59.417392 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nh6fw" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="registry-server" containerID="cri-o://a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787" gracePeriod=2 Jan 23 14:09:59 crc kubenswrapper[4771]: I0123 14:09:59.952645 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.046586 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-catalog-content\") pod \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.046657 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-utilities\") pod \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.046818 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x2nr\" (UniqueName: \"kubernetes.io/projected/5c6ef751-24a3-4f66-afe7-dc6c175fc306-kube-api-access-6x2nr\") pod \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\" (UID: \"5c6ef751-24a3-4f66-afe7-dc6c175fc306\") " Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.048772 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-utilities" (OuterVolumeSpecName: "utilities") pod "5c6ef751-24a3-4f66-afe7-dc6c175fc306" (UID: "5c6ef751-24a3-4f66-afe7-dc6c175fc306"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.053260 4771 generic.go:334] "Generic (PLEG): container finished" podID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerID="a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787" exitCode=0 Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.053341 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh6fw" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.053342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh6fw" event={"ID":"5c6ef751-24a3-4f66-afe7-dc6c175fc306","Type":"ContainerDied","Data":"a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787"} Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.053463 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh6fw" event={"ID":"5c6ef751-24a3-4f66-afe7-dc6c175fc306","Type":"ContainerDied","Data":"f1f47df7e1465b1c1821e364b7309b41049caffb975e4e5cbc9b60408a0deb9c"} Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.053493 4771 scope.go:117] "RemoveContainer" containerID="a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.054804 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6ef751-24a3-4f66-afe7-dc6c175fc306-kube-api-access-6x2nr" (OuterVolumeSpecName: "kube-api-access-6x2nr") pod "5c6ef751-24a3-4f66-afe7-dc6c175fc306" (UID: "5c6ef751-24a3-4f66-afe7-dc6c175fc306"). InnerVolumeSpecName "kube-api-access-6x2nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.057331 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4kr" event={"ID":"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f","Type":"ContainerStarted","Data":"132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73"} Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.087298 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lj4kr" podStartSLOduration=2.52132726 podStartE2EDuration="7.087271146s" podCreationTimestamp="2026-01-23 14:09:53 +0000 UTC" firstStartedPulling="2026-01-23 14:09:54.980239049 +0000 UTC m=+2236.002776674" lastFinishedPulling="2026-01-23 14:09:59.546182935 +0000 UTC m=+2240.568720560" observedRunningTime="2026-01-23 14:10:00.084718047 +0000 UTC m=+2241.107255692" watchObservedRunningTime="2026-01-23 14:10:00.087271146 +0000 UTC m=+2241.109808771" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.125103 4771 scope.go:117] "RemoveContainer" containerID="20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.126190 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c6ef751-24a3-4f66-afe7-dc6c175fc306" (UID: "5c6ef751-24a3-4f66-afe7-dc6c175fc306"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.150048 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.150162 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c6ef751-24a3-4f66-afe7-dc6c175fc306-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.150176 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x2nr\" (UniqueName: \"kubernetes.io/projected/5c6ef751-24a3-4f66-afe7-dc6c175fc306-kube-api-access-6x2nr\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.150097 4771 scope.go:117] "RemoveContainer" containerID="15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.172658 4771 scope.go:117] "RemoveContainer" containerID="a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787" Jan 23 14:10:00 crc kubenswrapper[4771]: E0123 14:10:00.173242 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787\": container with ID starting with a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787 not found: ID does not exist" containerID="a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.173298 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787"} err="failed to get container status \"a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787\": rpc error: code = NotFound desc = could not find container \"a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787\": container with ID starting with a1a92d33f86190da2149cb5ea2767bb6975be542ac3d8d3bcc2f4dba584c4787 not found: ID does not exist" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.173343 4771 scope.go:117] "RemoveContainer" containerID="20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6" Jan 23 14:10:00 crc kubenswrapper[4771]: E0123 14:10:00.173758 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6\": container with ID starting with 20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6 not found: ID does not exist" containerID="20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.173826 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6"} err="failed to get container status \"20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6\": rpc error: code = NotFound desc = could not find container \"20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6\": container with ID starting with 20986236d9dd19f6208c476f7b9bd32672866a14e591c928a9d2d0c991ee6ac6 not found: ID does not exist" Jan 23 14:10:00 crc 
kubenswrapper[4771]: I0123 14:10:00.173867 4771 scope.go:117] "RemoveContainer" containerID="15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22" Jan 23 14:10:00 crc kubenswrapper[4771]: E0123 14:10:00.174206 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22\": container with ID starting with 15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22 not found: ID does not exist" containerID="15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.174321 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22"} err="failed to get container status \"15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22\": rpc error: code = NotFound desc = could not find container \"15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22\": container with ID starting with 15fa26d1aec4506d17455a66aac6e68993f0242db751a9949a97befe21fcab22 not found: ID does not exist" Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.393179 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh6fw"] Jan 23 14:10:00 crc kubenswrapper[4771]: I0123 14:10:00.409133 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nh6fw"] Jan 23 14:10:01 crc kubenswrapper[4771]: I0123 14:10:01.077588 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddqts" event={"ID":"a20e3de0-1946-423d-b59a-be9866f97797","Type":"ContainerStarted","Data":"6ce5e845f6c48c5db38fca91de3aef3e5452fe97a5ae4c4172e502e2eee7a501"} Jan 23 14:10:01 crc kubenswrapper[4771]: I0123 14:10:01.247196 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" path="/var/lib/kubelet/pods/5c6ef751-24a3-4f66-afe7-dc6c175fc306/volumes" Jan 23 14:10:03 crc kubenswrapper[4771]: I0123 14:10:03.787400 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:10:03 crc kubenswrapper[4771]: I0123 14:10:03.790578 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lj4kr" Jan 23 14:10:04 crc kubenswrapper[4771]: I0123 14:10:04.114211 4771 generic.go:334] "Generic (PLEG): container finished" podID="a20e3de0-1946-423d-b59a-be9866f97797" containerID="6ce5e845f6c48c5db38fca91de3aef3e5452fe97a5ae4c4172e502e2eee7a501" exitCode=0 Jan 23 14:10:04 crc kubenswrapper[4771]: I0123 14:10:04.114283 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddqts" event={"ID":"a20e3de0-1946-423d-b59a-be9866f97797","Type":"ContainerDied","Data":"6ce5e845f6c48c5db38fca91de3aef3e5452fe97a5ae4c4172e502e2eee7a501"} Jan 23 14:10:04 crc kubenswrapper[4771]: I0123 14:10:04.839589 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lj4kr" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="registry-server" probeResult="failure" output=< Jan 23 14:10:04 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 14:10:04 crc kubenswrapper[4771]: > Jan 23 14:10:06 crc kubenswrapper[4771]: I0123 
14:10:06.145204 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddqts" event={"ID":"a20e3de0-1946-423d-b59a-be9866f97797","Type":"ContainerStarted","Data":"63ff1868b22a19cae1dfdeaab40721b37bf6ad7a1793cd288cd284461d12e7d6"}
Jan 23 14:10:06 crc kubenswrapper[4771]: I0123 14:10:06.175622 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddqts" podStartSLOduration=5.323769316 podStartE2EDuration="11.175597527s" podCreationTimestamp="2026-01-23 14:09:55 +0000 UTC" firstStartedPulling="2026-01-23 14:09:59.035755004 +0000 UTC m=+2240.058292629" lastFinishedPulling="2026-01-23 14:10:04.887583185 +0000 UTC m=+2245.910120840" observedRunningTime="2026-01-23 14:10:06.167137824 +0000 UTC m=+2247.189675459" watchObservedRunningTime="2026-01-23 14:10:06.175597527 +0000 UTC m=+2247.198135152"
Jan 23 14:10:06 crc kubenswrapper[4771]: I0123 14:10:06.380755 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddqts"
Jan 23 14:10:06 crc kubenswrapper[4771]: I0123 14:10:06.380831 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddqts"
Jan 23 14:10:07 crc kubenswrapper[4771]: I0123 14:10:07.430907 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ddqts" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:10:07 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:10:07 crc kubenswrapper[4771]: >
Jan 23 14:10:08 crc kubenswrapper[4771]: I0123 14:10:08.229059 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:10:08 crc kubenswrapper[4771]: E0123 14:10:08.229742 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:10:13 crc kubenswrapper[4771]: I0123 14:10:13.861892 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lj4kr"
Jan 23 14:10:13 crc kubenswrapper[4771]: I0123 14:10:13.928888 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lj4kr"
Jan 23 14:10:14 crc kubenswrapper[4771]: I0123 14:10:14.108450 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj4kr"]
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.247146 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lj4kr" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="registry-server" containerID="cri-o://132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73" gracePeriod=2
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.779834 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4kr"
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.838938 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-catalog-content\") pod \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") "
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.839167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5hkw\" (UniqueName: \"kubernetes.io/projected/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-kube-api-access-x5hkw\") pod \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") "
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.839488 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-utilities\") pod \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\" (UID: \"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f\") "
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.841476 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-utilities" (OuterVolumeSpecName: "utilities") pod "c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" (UID: "c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.848755 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-kube-api-access-x5hkw" (OuterVolumeSpecName: "kube-api-access-x5hkw") pod "c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" (UID: "c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f"). InnerVolumeSpecName "kube-api-access-x5hkw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.888427 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" (UID: "c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.942815 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.942858 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:15 crc kubenswrapper[4771]: I0123 14:10:15.942874 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5hkw\" (UniqueName: \"kubernetes.io/projected/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f-kube-api-access-x5hkw\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.262757 4771 generic.go:334] "Generic (PLEG): container finished" podID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerID="132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73" exitCode=0
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.262809 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4kr" event={"ID":"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f","Type":"ContainerDied","Data":"132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73"}
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.262851 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lj4kr" event={"ID":"c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f","Type":"ContainerDied","Data":"b1bcac56493c2d95e57c7c546b92f31460e776ec2d77dea7ac29e5addc485655"}
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.262875 4771 scope.go:117] "RemoveContainer" containerID="132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.262904 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lj4kr"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.304499 4771 scope.go:117] "RemoveContainer" containerID="f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.307917 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lj4kr"]
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.319957 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lj4kr"]
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.332485 4771 scope.go:117] "RemoveContainer" containerID="b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.399338 4771 scope.go:117] "RemoveContainer" containerID="132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73"
Jan 23 14:10:16 crc kubenswrapper[4771]: E0123 14:10:16.400027 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73\": container with ID starting with 132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73 not found: ID does not exist" containerID="132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.400072 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73"} err="failed to get container status \"132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73\": rpc error: code = NotFound desc = could not find container \"132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73\": container with ID starting with 132accdbd280659d18a07aa33d07e229b8fcb785f060e4f543a16966cbed2d73 not found: ID does not exist"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.400107 4771 scope.go:117] "RemoveContainer" containerID="f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2"
Jan 23 14:10:16 crc kubenswrapper[4771]: E0123 14:10:16.400433 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2\": container with ID starting with f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2 not found: ID does not exist" containerID="f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.400470 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2"} err="failed to get container status \"f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2\": rpc error: code = NotFound desc = could not find container \"f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2\": container with ID starting with f45bc4e87cf76e0feffcc67dff871406a79349741c3e47af16cfeb40c36ef3c2 not found: ID does not exist"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.400493 4771 scope.go:117] "RemoveContainer" containerID="b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592"
Jan 23 14:10:16 crc kubenswrapper[4771]: E0123 14:10:16.401058 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592\": container with ID starting with b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592 not found: ID does not exist" containerID="b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.401093 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592"} err="failed to get container status \"b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592\": rpc error: code = NotFound desc = could not find container \"b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592\": container with ID starting with b519cd455f5f18aa2ee4590a38d548f7083da7488fe77d868158da723e26b592 not found: ID does not exist"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.440244 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ddqts"
Jan 23 14:10:16 crc kubenswrapper[4771]: I0123 14:10:16.501758 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddqts"
Jan 23 14:10:17 crc kubenswrapper[4771]: I0123 14:10:17.242609 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" path="/var/lib/kubelet/pods/c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f/volumes"
Jan 23 14:10:18 crc kubenswrapper[4771]: I0123 14:10:18.912490 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddqts"]
Jan 23 14:10:18 crc kubenswrapper[4771]: I0123 14:10:18.913752 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ddqts" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="registry-server" containerID="cri-o://63ff1868b22a19cae1dfdeaab40721b37bf6ad7a1793cd288cd284461d12e7d6" gracePeriod=2
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.301696 4771 generic.go:334] "Generic (PLEG): container finished" podID="a20e3de0-1946-423d-b59a-be9866f97797" containerID="63ff1868b22a19cae1dfdeaab40721b37bf6ad7a1793cd288cd284461d12e7d6" exitCode=0
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.301776 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddqts" event={"ID":"a20e3de0-1946-423d-b59a-be9866f97797","Type":"ContainerDied","Data":"63ff1868b22a19cae1dfdeaab40721b37bf6ad7a1793cd288cd284461d12e7d6"}
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.419715 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddqts"
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.437144 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-utilities\") pod \"a20e3de0-1946-423d-b59a-be9866f97797\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") "
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.437288 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-catalog-content\") pod \"a20e3de0-1946-423d-b59a-be9866f97797\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") "
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.437527 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8x8q\" (UniqueName: \"kubernetes.io/projected/a20e3de0-1946-423d-b59a-be9866f97797-kube-api-access-m8x8q\") pod \"a20e3de0-1946-423d-b59a-be9866f97797\" (UID: \"a20e3de0-1946-423d-b59a-be9866f97797\") "
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.438884 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-utilities" (OuterVolumeSpecName: "utilities") pod "a20e3de0-1946-423d-b59a-be9866f97797" (UID: "a20e3de0-1946-423d-b59a-be9866f97797"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.455269 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20e3de0-1946-423d-b59a-be9866f97797-kube-api-access-m8x8q" (OuterVolumeSpecName: "kube-api-access-m8x8q") pod "a20e3de0-1946-423d-b59a-be9866f97797" (UID: "a20e3de0-1946-423d-b59a-be9866f97797"). InnerVolumeSpecName "kube-api-access-m8x8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.541314 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8x8q\" (UniqueName: \"kubernetes.io/projected/a20e3de0-1946-423d-b59a-be9866f97797-kube-api-access-m8x8q\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.541395 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.594291 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a20e3de0-1946-423d-b59a-be9866f97797" (UID: "a20e3de0-1946-423d-b59a-be9866f97797"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:10:19 crc kubenswrapper[4771]: I0123 14:10:19.644503 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a20e3de0-1946-423d-b59a-be9866f97797-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:10:20 crc kubenswrapper[4771]: I0123 14:10:20.315902 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddqts" event={"ID":"a20e3de0-1946-423d-b59a-be9866f97797","Type":"ContainerDied","Data":"af512ae2b6bae8efd115be4daa6fcacf12d0ca498e449682ebea89237c98e39e"}
Jan 23 14:10:20 crc kubenswrapper[4771]: I0123 14:10:20.316455 4771 scope.go:117] "RemoveContainer" containerID="63ff1868b22a19cae1dfdeaab40721b37bf6ad7a1793cd288cd284461d12e7d6"
Jan 23 14:10:20 crc kubenswrapper[4771]: I0123 14:10:20.316178 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddqts"
Jan 23 14:10:20 crc kubenswrapper[4771]: I0123 14:10:20.347651 4771 scope.go:117] "RemoveContainer" containerID="6ce5e845f6c48c5db38fca91de3aef3e5452fe97a5ae4c4172e502e2eee7a501"
Jan 23 14:10:20 crc kubenswrapper[4771]: I0123 14:10:20.369524 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ddqts"]
Jan 23 14:10:20 crc kubenswrapper[4771]: I0123 14:10:20.381195 4771 scope.go:117] "RemoveContainer" containerID="1497dd4dfdfcc38d39cc4100761d6e5886798032d6e56c87576955e1d3c95513"
Jan 23 14:10:20 crc kubenswrapper[4771]: I0123 14:10:20.384596 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ddqts"]
Jan 23 14:10:21 crc kubenswrapper[4771]: I0123 14:10:21.247100 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20e3de0-1946-423d-b59a-be9866f97797" path="/var/lib/kubelet/pods/a20e3de0-1946-423d-b59a-be9866f97797/volumes"
Jan 23 14:10:22 crc kubenswrapper[4771]: I0123 14:10:22.228794 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:10:22 crc kubenswrapper[4771]: E0123 14:10:22.229477 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:10:36 crc kubenswrapper[4771]: I0123 14:10:36.228939 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:10:36 crc kubenswrapper[4771]: E0123 14:10:36.231643 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:10:49 crc kubenswrapper[4771]: I0123 14:10:49.235699 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:10:49 crc kubenswrapper[4771]: E0123 14:10:49.237878 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.685778 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6bbpw"]
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687039 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="extract-content"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687064 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="extract-content"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687084 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687093 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687104 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="extract-utilities"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687113 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="extract-utilities"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687129 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="extract-utilities"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687137 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="extract-utilities"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687153 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="extract-content"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687160 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="extract-content"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687192 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687199 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687220 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="extract-utilities"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687226 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="extract-utilities"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687244 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="extract-content"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687251 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="extract-content"
Jan 23 14:10:51 crc kubenswrapper[4771]: E0123 14:10:51.687271 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687278 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687560 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1cff55c-1a17-4ee6-9ec0-9edfc6cb7f6f" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687584 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6ef751-24a3-4f66-afe7-dc6c175fc306" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.687608 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20e3de0-1946-423d-b59a-be9866f97797" containerName="registry-server"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.689706 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.710980 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bbpw"]
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.834182 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-utilities\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.834308 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-catalog-content\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.835611 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zh8\" (UniqueName: \"kubernetes.io/projected/6c2d682b-ea78-4274-a2ae-4dcab639014f-kube-api-access-v7zh8\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.937850 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-utilities\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.937980 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-catalog-content\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.938134 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zh8\" (UniqueName: \"kubernetes.io/projected/6c2d682b-ea78-4274-a2ae-4dcab639014f-kube-api-access-v7zh8\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.938463 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-utilities\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.939109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-catalog-content\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:51 crc kubenswrapper[4771]: I0123 14:10:51.975072 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zh8\" (UniqueName: \"kubernetes.io/projected/6c2d682b-ea78-4274-a2ae-4dcab639014f-kube-api-access-v7zh8\") pod \"redhat-marketplace-6bbpw\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") " pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:52 crc kubenswrapper[4771]: I0123 14:10:52.009907 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:10:52 crc kubenswrapper[4771]: W0123 14:10:52.308206 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c2d682b_ea78_4274_a2ae_4dcab639014f.slice/crio-d0b94f31f7955aeca1bf7781248811ed4c24530739b99e568ad508ac5962aff1 WatchSource:0}: Error finding container d0b94f31f7955aeca1bf7781248811ed4c24530739b99e568ad508ac5962aff1: Status 404 returned error can't find the container with id d0b94f31f7955aeca1bf7781248811ed4c24530739b99e568ad508ac5962aff1
Jan 23 14:10:52 crc kubenswrapper[4771]: I0123 14:10:52.308602 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bbpw"]
Jan 23 14:10:52 crc kubenswrapper[4771]: I0123 14:10:52.702895 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerID="5140c5bad41350c1c1bd79bd05e41f3cb3223ff16acc27e792c3b974f249926d" exitCode=0
Jan 23 14:10:52 crc kubenswrapper[4771]: I0123 14:10:52.703204 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bbpw" event={"ID":"6c2d682b-ea78-4274-a2ae-4dcab639014f","Type":"ContainerDied","Data":"5140c5bad41350c1c1bd79bd05e41f3cb3223ff16acc27e792c3b974f249926d"}
Jan 23 14:10:52 crc kubenswrapper[4771]: I0123 14:10:52.703240 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bbpw" event={"ID":"6c2d682b-ea78-4274-a2ae-4dcab639014f","Type":"ContainerStarted","Data":"d0b94f31f7955aeca1bf7781248811ed4c24530739b99e568ad508ac5962aff1"}
Jan 23 14:10:52 crc kubenswrapper[4771]: E0123 14:10:52.724984 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c2d682b_ea78_4274_a2ae_4dcab639014f.slice/crio-conmon-5140c5bad41350c1c1bd79bd05e41f3cb3223ff16acc27e792c3b974f249926d.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 14:10:53 crc kubenswrapper[4771]: I0123 14:10:53.720841 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bbpw" event={"ID":"6c2d682b-ea78-4274-a2ae-4dcab639014f","Type":"ContainerStarted","Data":"6d1bd4825f225f0a7d8f0748dc2ab93966b58985faa39eff3b951c39e4ecd113"}
Jan 23 14:10:54 crc kubenswrapper[4771]: I0123 14:10:54.732380 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerID="6d1bd4825f225f0a7d8f0748dc2ab93966b58985faa39eff3b951c39e4ecd113" exitCode=0
Jan 23 14:10:54 crc kubenswrapper[4771]: I0123 14:10:54.732464 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bbpw" event={"ID":"6c2d682b-ea78-4274-a2ae-4dcab639014f","Type":"ContainerDied","Data":"6d1bd4825f225f0a7d8f0748dc2ab93966b58985faa39eff3b951c39e4ecd113"}
Jan 23 14:10:55 crc kubenswrapper[4771]: I0123 14:10:55.744046 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bbpw" event={"ID":"6c2d682b-ea78-4274-a2ae-4dcab639014f","Type":"ContainerStarted","Data":"b9b062ec7dbc8771e1a2584718493164ec97d84f981d63f7eed93f42a3053ddb"}
Jan 23 14:10:55 crc kubenswrapper[4771]: I0123 14:10:55.775918 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6bbpw" podStartSLOduration=2.111332035 podStartE2EDuration="4.775888606s" podCreationTimestamp="2026-01-23 14:10:51 +0000 UTC" firstStartedPulling="2026-01-23 14:10:52.707278846 +0000 UTC m=+2293.729816471" lastFinishedPulling="2026-01-23 14:10:55.371835427 +0000 UTC m=+2296.394373042" observedRunningTime="2026-01-23 14:10:55.767231987 +0000 UTC m=+2296.789769612" watchObservedRunningTime="2026-01-23 14:10:55.775888606 +0000 UTC m=+2296.798426231"
Jan 23 14:11:02 crc kubenswrapper[4771]: I0123 14:11:02.010744 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:11:02 crc kubenswrapper[4771]: I0123 14:11:02.011694 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:11:02 crc kubenswrapper[4771]: I0123 14:11:02.064677 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:11:02 crc kubenswrapper[4771]: I0123 14:11:02.229120 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:11:02 crc kubenswrapper[4771]: E0123 14:11:02.229475 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:11:02 crc kubenswrapper[4771]: I0123 14:11:02.875547 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:11:04 crc kubenswrapper[4771]: I0123 14:11:04.075540 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bbpw"]
Jan 23 14:11:04 crc kubenswrapper[4771]: I0123 14:11:04.851042 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6bbpw" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="registry-server" containerID="cri-o://b9b062ec7dbc8771e1a2584718493164ec97d84f981d63f7eed93f42a3053ddb" gracePeriod=2
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.864697 4771 generic.go:334] "Generic (PLEG): container finished" podID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerID="b9b062ec7dbc8771e1a2584718493164ec97d84f981d63f7eed93f42a3053ddb" exitCode=0
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.864781 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bbpw" event={"ID":"6c2d682b-ea78-4274-a2ae-4dcab639014f","Type":"ContainerDied","Data":"b9b062ec7dbc8771e1a2584718493164ec97d84f981d63f7eed93f42a3053ddb"}
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.865120 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bbpw" event={"ID":"6c2d682b-ea78-4274-a2ae-4dcab639014f","Type":"ContainerDied","Data":"d0b94f31f7955aeca1bf7781248811ed4c24530739b99e568ad508ac5962aff1"}
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.865139 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b94f31f7955aeca1bf7781248811ed4c24530739b99e568ad508ac5962aff1"
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.891046 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.930047 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7zh8\" (UniqueName: \"kubernetes.io/projected/6c2d682b-ea78-4274-a2ae-4dcab639014f-kube-api-access-v7zh8\") pod \"6c2d682b-ea78-4274-a2ae-4dcab639014f\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") "
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.930304 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-catalog-content\") pod \"6c2d682b-ea78-4274-a2ae-4dcab639014f\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") "
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.930531 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-utilities\") pod \"6c2d682b-ea78-4274-a2ae-4dcab639014f\" (UID: \"6c2d682b-ea78-4274-a2ae-4dcab639014f\") "
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.932051 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-utilities" (OuterVolumeSpecName: "utilities") pod "6c2d682b-ea78-4274-a2ae-4dcab639014f" (UID: "6c2d682b-ea78-4274-a2ae-4dcab639014f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.940375 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2d682b-ea78-4274-a2ae-4dcab639014f-kube-api-access-v7zh8" (OuterVolumeSpecName: "kube-api-access-v7zh8") pod "6c2d682b-ea78-4274-a2ae-4dcab639014f" (UID: "6c2d682b-ea78-4274-a2ae-4dcab639014f"). InnerVolumeSpecName "kube-api-access-v7zh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:11:05 crc kubenswrapper[4771]: I0123 14:11:05.961092 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c2d682b-ea78-4274-a2ae-4dcab639014f" (UID: "6c2d682b-ea78-4274-a2ae-4dcab639014f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:11:06 crc kubenswrapper[4771]: I0123 14:11:06.033560 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:11:06 crc kubenswrapper[4771]: I0123 14:11:06.034065 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7zh8\" (UniqueName: \"kubernetes.io/projected/6c2d682b-ea78-4274-a2ae-4dcab639014f-kube-api-access-v7zh8\") on node \"crc\" DevicePath \"\""
Jan 23 14:11:06 crc kubenswrapper[4771]: I0123 14:11:06.034076 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c2d682b-ea78-4274-a2ae-4dcab639014f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:11:06 crc kubenswrapper[4771]: I0123 14:11:06.875019 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bbpw"
Jan 23 14:11:06 crc kubenswrapper[4771]: I0123 14:11:06.916743 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bbpw"]
Jan 23 14:11:06 crc kubenswrapper[4771]: I0123 14:11:06.941188 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bbpw"]
Jan 23 14:11:07 crc kubenswrapper[4771]: I0123 14:11:07.242486 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" path="/var/lib/kubelet/pods/6c2d682b-ea78-4274-a2ae-4dcab639014f/volumes"
Jan 23 14:11:14 crc kubenswrapper[4771]: I0123 14:11:14.228741 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:11:14 crc kubenswrapper[4771]: E0123 14:11:14.229710 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:11:26 crc kubenswrapper[4771]: I0123 14:11:26.229228 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:11:26 crc kubenswrapper[4771]: E0123 14:11:26.230293 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:11:39 crc kubenswrapper[4771]: I0123 14:11:39.246482 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:11:39 crc kubenswrapper[4771]: E0123 14:11:39.248842 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:11:50 crc kubenswrapper[4771]: I0123 14:11:50.228260 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:11:50 crc kubenswrapper[4771]: E0123 14:11:50.229085 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:12:01 crc kubenswrapper[4771]: I0123 14:12:01.228811 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:12:01 crc kubenswrapper[4771]: E0123 14:12:01.229835 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:12:15 crc kubenswrapper[4771]: I0123 14:12:15.228725 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:12:15 crc kubenswrapper[4771]: E0123 14:12:15.229691 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:12:28 crc kubenswrapper[4771]: I0123 14:12:28.228532 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:12:28 crc kubenswrapper[4771]: E0123 14:12:28.232568 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:12:42 crc kubenswrapper[4771]: I0123 14:12:42.228724 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:12:42 crc kubenswrapper[4771]: E0123 14:12:42.229644 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:12:55 crc kubenswrapper[4771]: I0123 14:12:55.229621 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:12:55 crc kubenswrapper[4771]: E0123 14:12:55.230756 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:13:10 crc kubenswrapper[4771]: I0123 14:13:10.229101 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:13:10 crc kubenswrapper[4771]: E0123 14:13:10.230598 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:13:25 crc kubenswrapper[4771]: I0123 14:13:25.228994 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:13:25 crc kubenswrapper[4771]: E0123 14:13:25.230126 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:13:38 crc kubenswrapper[4771]: I0123 14:13:38.229158 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:13:38 crc kubenswrapper[4771]: E0123 14:13:38.230082 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:13:53 crc kubenswrapper[4771]: I0123 14:13:53.228597 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a"
Jan 23 14:13:53 crc kubenswrapper[4771]: E0123 14:13:53.229654 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:13:55 crc kubenswrapper[4771]: I0123 14:13:55.753527 4771 generic.go:334] "Generic (PLEG): container finished" podID="af9e27b6-338f-471a-ae8b-041038e92cfe" containerID="bf265b810903046d60c84edbdd110a7062a85d375ca49e47aa11736ce5d22e42" exitCode=0
Jan 23 14:13:55 crc kubenswrapper[4771]: I0123 14:13:55.753613 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" event={"ID":"af9e27b6-338f-471a-ae8b-041038e92cfe","Type":"ContainerDied","Data":"bf265b810903046d60c84edbdd110a7062a85d375ca49e47aa11736ce5d22e42"}
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.606151 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.779874 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z" event={"ID":"af9e27b6-338f-471a-ae8b-041038e92cfe","Type":"ContainerDied","Data":"3201f6fda2f93b9f3d06de559010fcee9e12a73e996551afcab2bd463470834d"}
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.780342 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3201f6fda2f93b9f3d06de559010fcee9e12a73e996551afcab2bd463470834d"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.779923 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.795855 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-inventory\") pod \"af9e27b6-338f-471a-ae8b-041038e92cfe\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") "
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.796065 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzkb6\" (UniqueName: \"kubernetes.io/projected/af9e27b6-338f-471a-ae8b-041038e92cfe-kube-api-access-fzkb6\") pod \"af9e27b6-338f-471a-ae8b-041038e92cfe\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") "
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.796131 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-combined-ca-bundle\") pod \"af9e27b6-338f-471a-ae8b-041038e92cfe\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") "
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.796272 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-ssh-key-openstack-edpm-ipam\") pod \"af9e27b6-338f-471a-ae8b-041038e92cfe\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") "
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.796422 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-secret-0\") pod \"af9e27b6-338f-471a-ae8b-041038e92cfe\" (UID: \"af9e27b6-338f-471a-ae8b-041038e92cfe\") "
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.804554 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9e27b6-338f-471a-ae8b-041038e92cfe-kube-api-access-fzkb6" (OuterVolumeSpecName: "kube-api-access-fzkb6") pod "af9e27b6-338f-471a-ae8b-041038e92cfe" (UID: "af9e27b6-338f-471a-ae8b-041038e92cfe"). InnerVolumeSpecName "kube-api-access-fzkb6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.813684 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "af9e27b6-338f-471a-ae8b-041038e92cfe" (UID: "af9e27b6-338f-471a-ae8b-041038e92cfe"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.848628 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-inventory" (OuterVolumeSpecName: "inventory") pod "af9e27b6-338f-471a-ae8b-041038e92cfe" (UID: "af9e27b6-338f-471a-ae8b-041038e92cfe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.861099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "af9e27b6-338f-471a-ae8b-041038e92cfe" (UID: "af9e27b6-338f-471a-ae8b-041038e92cfe"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.882610 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af9e27b6-338f-471a-ae8b-041038e92cfe" (UID: "af9e27b6-338f-471a-ae8b-041038e92cfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.899822 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.899865 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.899879 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzkb6\" (UniqueName: \"kubernetes.io/projected/af9e27b6-338f-471a-ae8b-041038e92cfe-kube-api-access-fzkb6\") on node \"crc\" DevicePath \"\""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.899894 4771 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.899909 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af9e27b6-338f-471a-ae8b-041038e92cfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.943375 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"]
Jan 23 14:13:57 crc kubenswrapper[4771]: E0123 14:13:57.944125 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="extract-utilities"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.944143 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="extract-utilities"
Jan 23 14:13:57 crc kubenswrapper[4771]: E0123 14:13:57.944166 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="extract-content"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.944172 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="extract-content"
Jan 23 14:13:57 crc kubenswrapper[4771]: E0123 14:13:57.944191 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af9e27b6-338f-471a-ae8b-041038e92cfe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.944200 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="af9e27b6-338f-471a-ae8b-041038e92cfe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 23 14:13:57 crc kubenswrapper[4771]: E0123 14:13:57.944213 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="registry-server"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.944218 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="registry-server"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.944491 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="af9e27b6-338f-471a-ae8b-041038e92cfe" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.944525 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2d682b-ea78-4274-a2ae-4dcab639014f" containerName="registry-server"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.945375 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.948772 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.948983 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.949460 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 23 14:13:57 crc kubenswrapper[4771]: I0123 14:13:57.978712 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"]
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.002280 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.002363 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.002386 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.002429 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.002452 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cnr\" (UniqueName: \"kubernetes.io/projected/922839ae-8351-47a1-8478-bd565744b023-kube-api-access-g6cnr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.002534 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.003094 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.003241 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/922839ae-8351-47a1-8478-bd565744b023-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.003479 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105509 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/922839ae-8351-47a1-8478-bd565744b023-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105629 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105690 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105721 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105747 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105773 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105799 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cnr\" (UniqueName: \"kubernetes.io/projected/922839ae-8351-47a1-8478-bd565744b023-kube-api-access-g6cnr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105842 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.105946 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"
Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.108385 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/922839ae-8351-47a1-8478-bd565744b023-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.112269 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.113251 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.114018 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.114541 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.119109 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.120699 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.120744 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.126879 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cnr\" (UniqueName: 
\"kubernetes.io/projected/922839ae-8351-47a1-8478-bd565744b023-kube-api-access-g6cnr\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s6mfj\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.285175 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:13:58 crc kubenswrapper[4771]: I0123 14:13:58.885976 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj"] Jan 23 14:13:59 crc kubenswrapper[4771]: I0123 14:13:59.818944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" event={"ID":"922839ae-8351-47a1-8478-bd565744b023","Type":"ContainerStarted","Data":"f12c7239f23d338ad033b1f65d934d42d75ab49c0cc02843300c6dc10436f425"} Jan 23 14:13:59 crc kubenswrapper[4771]: I0123 14:13:59.857019 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" podStartSLOduration=2.2829967890000002 podStartE2EDuration="2.856995142s" podCreationTimestamp="2026-01-23 14:13:57 +0000 UTC" firstStartedPulling="2026-01-23 14:13:58.89660965 +0000 UTC m=+2479.919147275" lastFinishedPulling="2026-01-23 14:13:59.470608003 +0000 UTC m=+2480.493145628" observedRunningTime="2026-01-23 14:13:59.845803838 +0000 UTC m=+2480.868341483" watchObservedRunningTime="2026-01-23 14:13:59.856995142 +0000 UTC m=+2480.879532777" Jan 23 14:14:00 crc kubenswrapper[4771]: I0123 14:14:00.828556 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" event={"ID":"922839ae-8351-47a1-8478-bd565744b023","Type":"ContainerStarted","Data":"35cfa7a1c0c79e6d776cb3b2cc3b1c56f4cf3c24348f4582bf4ccb12052d778b"} Jan 23 14:14:05 crc kubenswrapper[4771]: I0123 14:14:05.228741 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" Jan 23 14:14:05 crc kubenswrapper[4771]: E0123 14:14:05.230837 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:14:18 crc kubenswrapper[4771]: I0123 14:14:18.228880 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" Jan 23 14:14:18 crc kubenswrapper[4771]: E0123 14:14:18.229779 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:14:33 crc kubenswrapper[4771]: I0123 14:14:33.228478 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" Jan 23 14:14:34 crc kubenswrapper[4771]: I0123 14:14:34.200706 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"232aa60aeca86ab158652639a3339367db4e0a8c92e9778fe7d9b53a5dcf7e08"} Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.156153 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x"] Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.159328 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.163660 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.164233 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.173207 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x"] Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.331328 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sghq\" (UniqueName: \"kubernetes.io/projected/5fb9bebd-ad35-4557-a357-ce8f389131fc-kube-api-access-5sghq\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.332759 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fb9bebd-ad35-4557-a357-ce8f389131fc-config-volume\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.332995 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fb9bebd-ad35-4557-a357-ce8f389131fc-secret-volume\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.435805 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sghq\" (UniqueName: \"kubernetes.io/projected/5fb9bebd-ad35-4557-a357-ce8f389131fc-kube-api-access-5sghq\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.435990 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fb9bebd-ad35-4557-a357-ce8f389131fc-config-volume\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.436104 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fb9bebd-ad35-4557-a357-ce8f389131fc-secret-volume\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.437683 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fb9bebd-ad35-4557-a357-ce8f389131fc-config-volume\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.450701 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fb9bebd-ad35-4557-a357-ce8f389131fc-secret-volume\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.468014 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sghq\" (UniqueName: \"kubernetes.io/projected/5fb9bebd-ad35-4557-a357-ce8f389131fc-kube-api-access-5sghq\") pod \"collect-profiles-29486295-k696x\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:00 crc kubenswrapper[4771]: I0123 14:15:00.487142 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:01 crc kubenswrapper[4771]: I0123 14:15:01.007112 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x"] Jan 23 14:15:01 crc kubenswrapper[4771]: I0123 14:15:01.481287 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" event={"ID":"5fb9bebd-ad35-4557-a357-ce8f389131fc","Type":"ContainerStarted","Data":"d22af3278c3b1636c8ff7bd43798a40c3bfbfd981066ff611bd19afe1093f1b6"} Jan 23 14:15:01 crc kubenswrapper[4771]: I0123 14:15:01.481662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" event={"ID":"5fb9bebd-ad35-4557-a357-ce8f389131fc","Type":"ContainerStarted","Data":"bc6589292baa2007de3113b58e35f45d0de9e8614b20b00a42e68bb0f1113589"} Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.494213 4771 generic.go:334] "Generic (PLEG): container finished" podID="5fb9bebd-ad35-4557-a357-ce8f389131fc" containerID="d22af3278c3b1636c8ff7bd43798a40c3bfbfd981066ff611bd19afe1093f1b6" exitCode=0 Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.494383 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" event={"ID":"5fb9bebd-ad35-4557-a357-ce8f389131fc","Type":"ContainerDied","Data":"d22af3278c3b1636c8ff7bd43798a40c3bfbfd981066ff611bd19afe1093f1b6"} Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.883351 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.912024 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sghq\" (UniqueName: \"kubernetes.io/projected/5fb9bebd-ad35-4557-a357-ce8f389131fc-kube-api-access-5sghq\") pod \"5fb9bebd-ad35-4557-a357-ce8f389131fc\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.912156 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fb9bebd-ad35-4557-a357-ce8f389131fc-secret-volume\") pod \"5fb9bebd-ad35-4557-a357-ce8f389131fc\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.912279 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fb9bebd-ad35-4557-a357-ce8f389131fc-config-volume\") pod \"5fb9bebd-ad35-4557-a357-ce8f389131fc\" (UID: \"5fb9bebd-ad35-4557-a357-ce8f389131fc\") " Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.913229 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fb9bebd-ad35-4557-a357-ce8f389131fc-config-volume" (OuterVolumeSpecName: "config-volume") pod "5fb9bebd-ad35-4557-a357-ce8f389131fc" (UID: "5fb9bebd-ad35-4557-a357-ce8f389131fc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.926662 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fb9bebd-ad35-4557-a357-ce8f389131fc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5fb9bebd-ad35-4557-a357-ce8f389131fc" (UID: "5fb9bebd-ad35-4557-a357-ce8f389131fc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:15:02 crc kubenswrapper[4771]: I0123 14:15:02.928106 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb9bebd-ad35-4557-a357-ce8f389131fc-kube-api-access-5sghq" (OuterVolumeSpecName: "kube-api-access-5sghq") pod "5fb9bebd-ad35-4557-a357-ce8f389131fc" (UID: "5fb9bebd-ad35-4557-a357-ce8f389131fc"). InnerVolumeSpecName "kube-api-access-5sghq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.015253 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sghq\" (UniqueName: \"kubernetes.io/projected/5fb9bebd-ad35-4557-a357-ce8f389131fc-kube-api-access-5sghq\") on node \"crc\" DevicePath \"\"" Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.015314 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fb9bebd-ad35-4557-a357-ce8f389131fc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.015329 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fb9bebd-ad35-4557-a357-ce8f389131fc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.507211 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" event={"ID":"5fb9bebd-ad35-4557-a357-ce8f389131fc","Type":"ContainerDied","Data":"bc6589292baa2007de3113b58e35f45d0de9e8614b20b00a42e68bb0f1113589"} Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.507278 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6589292baa2007de3113b58e35f45d0de9e8614b20b00a42e68bb0f1113589" Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.507341 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x" Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.967452 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd"] Jan 23 14:15:03 crc kubenswrapper[4771]: I0123 14:15:03.977726 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486250-89rvd"] Jan 23 14:15:05 crc kubenswrapper[4771]: I0123 14:15:05.243085 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3434d12e-d777-4664-a29a-1d2598306b09" path="/var/lib/kubelet/pods/3434d12e-d777-4664-a29a-1d2598306b09/volumes" Jan 23 14:15:50 crc kubenswrapper[4771]: I0123 14:15:50.825127 4771 scope.go:117] "RemoveContainer" containerID="f98063633c73baecbcacef77b0c1a7e98317ae3aed02c284ac703573c5dedf91" Jan 23 14:16:44 crc kubenswrapper[4771]: I0123 14:16:44.667026 4771 generic.go:334] "Generic (PLEG): container finished" podID="922839ae-8351-47a1-8478-bd565744b023" containerID="35cfa7a1c0c79e6d776cb3b2cc3b1c56f4cf3c24348f4582bf4ccb12052d778b" exitCode=0 Jan 23 14:16:44 crc kubenswrapper[4771]: I0123 14:16:44.668022 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" event={"ID":"922839ae-8351-47a1-8478-bd565744b023","Type":"ContainerDied","Data":"35cfa7a1c0c79e6d776cb3b2cc3b1c56f4cf3c24348f4582bf4ccb12052d778b"} Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.162460 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.260773 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-0\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261162 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-combined-ca-bundle\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261191 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-ssh-key-openstack-edpm-ipam\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261248 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-1\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261341 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cnr\" (UniqueName: \"kubernetes.io/projected/922839ae-8351-47a1-8478-bd565744b023-kube-api-access-g6cnr\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261379 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/922839ae-8351-47a1-8478-bd565744b023-nova-extra-config-0\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261534 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-inventory\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261605 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-0\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.261637 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-1\") pod \"922839ae-8351-47a1-8478-bd565744b023\" (UID: \"922839ae-8351-47a1-8478-bd565744b023\") " Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.269084 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/922839ae-8351-47a1-8478-bd565744b023-kube-api-access-g6cnr" (OuterVolumeSpecName: "kube-api-access-g6cnr") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "kube-api-access-g6cnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.283679 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.295158 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.295192 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.300099 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.302636 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.306812 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.316906 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-inventory" (OuterVolumeSpecName: "inventory") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.322724 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/922839ae-8351-47a1-8478-bd565744b023-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "922839ae-8351-47a1-8478-bd565744b023" (UID: "922839ae-8351-47a1-8478-bd565744b023"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365183 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365227 4771 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365238 4771 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365247 4771 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365256 4771 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365264 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365274 4771 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/922839ae-8351-47a1-8478-bd565744b023-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365285 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cnr\" (UniqueName: \"kubernetes.io/projected/922839ae-8351-47a1-8478-bd565744b023-kube-api-access-g6cnr\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.365296 4771 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/922839ae-8351-47a1-8478-bd565744b023-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.693005 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" event={"ID":"922839ae-8351-47a1-8478-bd565744b023","Type":"ContainerDied","Data":"f12c7239f23d338ad033b1f65d934d42d75ab49c0cc02843300c6dc10436f425"} Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.693065 4771 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f12c7239f23d338ad033b1f65d934d42d75ab49c0cc02843300c6dc10436f425" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.693127 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s6mfj" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.816800 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8"] Jan 23 14:16:46 crc kubenswrapper[4771]: E0123 14:16:46.817527 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb9bebd-ad35-4557-a357-ce8f389131fc" containerName="collect-profiles" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.817562 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb9bebd-ad35-4557-a357-ce8f389131fc" containerName="collect-profiles" Jan 23 14:16:46 crc kubenswrapper[4771]: E0123 14:16:46.817637 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="922839ae-8351-47a1-8478-bd565744b023" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.817648 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="922839ae-8351-47a1-8478-bd565744b023" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.817889 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fb9bebd-ad35-4557-a357-ce8f389131fc" containerName="collect-profiles" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.817921 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="922839ae-8351-47a1-8478-bd565744b023" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.818975 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.821823 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.822190 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vlbh7" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.822202 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.822455 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.822825 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.830923 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8"] Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.979363 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.979500 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52m2\" (UniqueName: \"kubernetes.io/projected/af6c0a2c-2354-4db8-9468-951607428157-kube-api-access-f52m2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.979583 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.979652 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.979685 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:46 crc 
kubenswrapper[4771]: I0123 14:16:46.979717 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:46 crc kubenswrapper[4771]: I0123 14:16:46.979820 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.083818 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.083903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.083939 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.084244 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.084595 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.084667 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52m2\" (UniqueName: \"kubernetes.io/projected/af6c0a2c-2354-4db8-9468-951607428157-kube-api-access-f52m2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" 
(UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.084850 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.091108 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.093460 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.094218 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.094515 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.094597 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.096169 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.107618 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52m2\" (UniqueName: \"kubernetes.io/projected/af6c0a2c-2354-4db8-9468-951607428157-kube-api-access-f52m2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-czrj8\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.143010 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.746634 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8"] Jan 23 14:16:47 crc kubenswrapper[4771]: I0123 14:16:47.761678 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:16:48 crc kubenswrapper[4771]: I0123 14:16:48.749752 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" event={"ID":"af6c0a2c-2354-4db8-9468-951607428157","Type":"ContainerStarted","Data":"f25dd5210b655cbbca448c117e53e950a89e638a1e3c45696d8c7e49612a0db5"} Jan 23 14:16:48 crc kubenswrapper[4771]: I0123 14:16:48.749810 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" event={"ID":"af6c0a2c-2354-4db8-9468-951607428157","Type":"ContainerStarted","Data":"e6dfca00ea28fbacb8091a92673c8ed0b7ecd01fc3d02d72c19445c8eb3915ba"} Jan 23 14:16:48 crc kubenswrapper[4771]: I0123 14:16:48.782402 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" podStartSLOduration=2.300936052 podStartE2EDuration="2.782375903s" podCreationTimestamp="2026-01-23 14:16:46 +0000 UTC" firstStartedPulling="2026-01-23 14:16:47.758648013 +0000 UTC m=+2648.781185638" lastFinishedPulling="2026-01-23 14:16:48.240087864 +0000 UTC m=+2649.262625489" observedRunningTime="2026-01-23 14:16:48.776838717 +0000 UTC m=+2649.799376362" watchObservedRunningTime="2026-01-23 14:16:48.782375903 +0000 UTC m=+2649.804913528" Jan 23 14:17:00 crc kubenswrapper[4771]: I0123 14:17:00.312331 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:17:00 crc kubenswrapper[4771]: I0123 14:17:00.313188 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:17:30 crc kubenswrapper[4771]: I0123 14:17:30.311960 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:17:30 crc kubenswrapper[4771]: I0123 14:17:30.313542 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
23 14:17:50 crc kubenswrapper[4771]: I0123 14:17:50.939981 4771 scope.go:117] "RemoveContainer" containerID="5140c5bad41350c1c1bd79bd05e41f3cb3223ff16acc27e792c3b974f249926d" Jan 23 14:17:50 crc kubenswrapper[4771]: I0123 14:17:50.968396 4771 scope.go:117] "RemoveContainer" containerID="6d1bd4825f225f0a7d8f0748dc2ab93966b58985faa39eff3b951c39e4ecd113" Jan 23 14:17:51 crc kubenswrapper[4771]: I0123 14:17:51.020224 4771 scope.go:117] "RemoveContainer" containerID="b9b062ec7dbc8771e1a2584718493164ec97d84f981d63f7eed93f42a3053ddb" Jan 23 14:18:00 crc kubenswrapper[4771]: I0123 14:18:00.311647 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:18:00 crc kubenswrapper[4771]: I0123 14:18:00.312429 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:18:00 crc kubenswrapper[4771]: I0123 14:18:00.312480 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 14:18:00 crc kubenswrapper[4771]: I0123 14:18:00.313380 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"232aa60aeca86ab158652639a3339367db4e0a8c92e9778fe7d9b53a5dcf7e08"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:18:00 crc kubenswrapper[4771]: I0123 14:18:00.313458 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://232aa60aeca86ab158652639a3339367db4e0a8c92e9778fe7d9b53a5dcf7e08" gracePeriod=600 Jan 23 14:18:01 crc kubenswrapper[4771]: I0123 14:18:01.341992 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="232aa60aeca86ab158652639a3339367db4e0a8c92e9778fe7d9b53a5dcf7e08" exitCode=0 Jan 23 14:18:01 crc kubenswrapper[4771]: I0123 14:18:01.342097 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"232aa60aeca86ab158652639a3339367db4e0a8c92e9778fe7d9b53a5dcf7e08"} Jan 23 14:18:01 crc kubenswrapper[4771]: I0123 14:18:01.342899 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725"} Jan 23 14:18:01 crc kubenswrapper[4771]: I0123 14:18:01.342938 4771 scope.go:117] "RemoveContainer" containerID="9c942bc3ff009b27ddcd4c9ae171412f5c98879446d60c14b6ec458a89ba680a" Jan 23 14:19:05 crc kubenswrapper[4771]: I0123 14:19:05.018320 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="af6c0a2c-2354-4db8-9468-951607428157" containerID="f25dd5210b655cbbca448c117e53e950a89e638a1e3c45696d8c7e49612a0db5" exitCode=0 Jan 23 14:19:05 crc kubenswrapper[4771]: I0123 14:19:05.018406 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" event={"ID":"af6c0a2c-2354-4db8-9468-951607428157","Type":"ContainerDied","Data":"f25dd5210b655cbbca448c117e53e950a89e638a1e3c45696d8c7e49612a0db5"} Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.597764 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.749082 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-2\") pod \"af6c0a2c-2354-4db8-9468-951607428157\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.749426 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-inventory\") pod \"af6c0a2c-2354-4db8-9468-951607428157\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.749620 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-1\") pod \"af6c0a2c-2354-4db8-9468-951607428157\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.749785 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ssh-key-openstack-edpm-ipam\") pod \"af6c0a2c-2354-4db8-9468-951607428157\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.749969 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-telemetry-combined-ca-bundle\") pod \"af6c0a2c-2354-4db8-9468-951607428157\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.750123 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f52m2\" (UniqueName: \"kubernetes.io/projected/af6c0a2c-2354-4db8-9468-951607428157-kube-api-access-f52m2\") pod \"af6c0a2c-2354-4db8-9468-951607428157\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.750382 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-0\") pod \"af6c0a2c-2354-4db8-9468-951607428157\" (UID: \"af6c0a2c-2354-4db8-9468-951607428157\") " Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.756601 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6c0a2c-2354-4db8-9468-951607428157-kube-api-access-f52m2" 
(OuterVolumeSpecName: "kube-api-access-f52m2") pod "af6c0a2c-2354-4db8-9468-951607428157" (UID: "af6c0a2c-2354-4db8-9468-951607428157"). InnerVolumeSpecName "kube-api-access-f52m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.757198 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "af6c0a2c-2354-4db8-9468-951607428157" (UID: "af6c0a2c-2354-4db8-9468-951607428157"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.785602 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "af6c0a2c-2354-4db8-9468-951607428157" (UID: "af6c0a2c-2354-4db8-9468-951607428157"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.786016 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af6c0a2c-2354-4db8-9468-951607428157" (UID: "af6c0a2c-2354-4db8-9468-951607428157"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.788564 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "af6c0a2c-2354-4db8-9468-951607428157" (UID: "af6c0a2c-2354-4db8-9468-951607428157"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.789149 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-inventory" (OuterVolumeSpecName: "inventory") pod "af6c0a2c-2354-4db8-9468-951607428157" (UID: "af6c0a2c-2354-4db8-9468-951607428157"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.791583 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "af6c0a2c-2354-4db8-9468-951607428157" (UID: "af6c0a2c-2354-4db8-9468-951607428157"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.855853 4771 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.855898 4771 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.855909 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f52m2\" (UniqueName: \"kubernetes.io/projected/af6c0a2c-2354-4db8-9468-951607428157-kube-api-access-f52m2\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.855918 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.855930 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.855941 4771 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:06 crc kubenswrapper[4771]: I0123 14:19:06.855952 4771 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/af6c0a2c-2354-4db8-9468-951607428157-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:07 crc kubenswrapper[4771]: I0123 14:19:07.048457 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" event={"ID":"af6c0a2c-2354-4db8-9468-951607428157","Type":"ContainerDied","Data":"e6dfca00ea28fbacb8091a92673c8ed0b7ecd01fc3d02d72c19445c8eb3915ba"} Jan 23 14:19:07 crc kubenswrapper[4771]: I0123 14:19:07.048942 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6dfca00ea28fbacb8091a92673c8ed0b7ecd01fc3d02d72c19445c8eb3915ba" Jan 23 14:19:07 crc kubenswrapper[4771]: I0123 14:19:07.049014 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-czrj8" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.032775 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 23 14:19:44 crc kubenswrapper[4771]: E0123 14:19:44.034216 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6c0a2c-2354-4db8-9468-951607428157" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.034237 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6c0a2c-2354-4db8-9468-951607428157" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.034561 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6c0a2c-2354-4db8-9468-951607428157" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.036151 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.038602 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.074215 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089394 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089473 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rc7\" (UniqueName: \"kubernetes.io/projected/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-kube-api-access-z2rc7\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089531 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089570 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-scripts\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089593 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089681 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-config-data\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089714 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089742 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089802 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089875 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-run\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089898 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-sys\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.089991 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-dev\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.090058 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.090187 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.090230 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-lib-modules\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 
14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.192752 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.192870 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-run\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.192893 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-sys\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.192910 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-dev\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.192930 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.192961 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.192981 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-lib-modules\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193019 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-sys\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193032 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-run\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193085 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 
14:19:44.193068 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-lib-modules\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193111 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rc7\" (UniqueName: \"kubernetes.io/projected/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-kube-api-access-z2rc7\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193139 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193154 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193194 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-scripts\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193211 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193246 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-config-data\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193273 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193291 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193300 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc 
kubenswrapper[4771]: I0123 14:19:44.193420 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-dev\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193482 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193563 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193578 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.193728 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-etc-nvme\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.205494 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.205767 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-config-data\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.210022 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-scripts\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.228309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-config-data-custom\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.232098 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rc7\" (UniqueName: \"kubernetes.io/projected/9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b-kube-api-access-z2rc7\") pod \"cinder-backup-0\" (UID: \"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b\") " pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.238595 
4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.241705 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.247062 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.261204 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.312895 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.312968 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.313080 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.313308 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.313345 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.313398 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.330100 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.330240 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.330274 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-run\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.330358 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.330394 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.330481 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.330534 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.331349 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.331552 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgzh\" (UniqueName: \"kubernetes.io/projected/3f3184c8-6eb4-417e-aaab-707a459e8d6e-kube-api-access-4vgzh\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.367100 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.370844 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.375309 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.376397 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.484355 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.491483 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.491657 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.491709 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.491867 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vgzh\" (UniqueName: \"kubernetes.io/projected/3f3184c8-6eb4-417e-aaab-707a459e8d6e-kube-api-access-4vgzh\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.491913 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.491981 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492045 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492075 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc 
kubenswrapper[4771]: I0123 14:19:44.492098 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492186 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492234 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492259 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492298 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492321 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492380 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492402 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492653 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492677 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492698 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492730 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492746 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492783 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492818 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-run\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492837 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492952 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.492976 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.493007 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkhbf\" (UniqueName: \"kubernetes.io/projected/b9fc32dc-49be-4722-9430-260c9ae2da80-kube-api-access-bkhbf\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc 
kubenswrapper[4771]: I0123 14:19:44.493047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.493072 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.493246 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.493303 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.493807 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.494318 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-run\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.499557 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.499635 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.500461 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-sys\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.500499 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-dev\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 
crc kubenswrapper[4771]: I0123 14:19:44.500564 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.500619 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3f3184c8-6eb4-417e-aaab-707a459e8d6e-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.511377 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.517186 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.522725 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.529082 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f3184c8-6eb4-417e-aaab-707a459e8d6e-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.543078 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.543924 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vgzh\" (UniqueName: \"kubernetes.io/projected/3f3184c8-6eb4-417e-aaab-707a459e8d6e-kube-api-access-4vgzh\") pod \"cinder-volume-nfs-0\" (UID: \"3f3184c8-6eb4-417e-aaab-707a459e8d6e\") " pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.595298 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.596913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.597778 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.597897 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.598826 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkhbf\" (UniqueName: \"kubernetes.io/projected/b9fc32dc-49be-4722-9430-260c9ae2da80-kube-api-access-bkhbf\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.598874 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.598934 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.595483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.599377 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.599080 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.600371 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.600635 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: 
I0123 14:19:44.601340 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.602322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.602579 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.602852 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.603046 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.603139 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.603542 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.603675 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.603910 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.602394 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") 
" pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.606127 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.606289 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.606434 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.606572 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b9fc32dc-49be-4722-9430-260c9ae2da80-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.613355 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.615198 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.617983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9fc32dc-49be-4722-9430-260c9ae2da80-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.637592 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkhbf\" (UniqueName: \"kubernetes.io/projected/b9fc32dc-49be-4722-9430-260c9ae2da80-kube-api-access-bkhbf\") pod \"cinder-volume-nfs-2-0\" (UID: \"b9fc32dc-49be-4722-9430-260c9ae2da80\") " pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.677375 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:44 crc kubenswrapper[4771]: I0123 14:19:44.783218 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:45 crc kubenswrapper[4771]: I0123 14:19:45.270913 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 23 14:19:45 crc kubenswrapper[4771]: I0123 14:19:45.490687 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 23 14:19:45 crc kubenswrapper[4771]: I0123 14:19:45.563526 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"3f3184c8-6eb4-417e-aaab-707a459e8d6e","Type":"ContainerStarted","Data":"6038d125ee354c6e2334d974fe89a374ef28bc98019bed584f87c84322c6f3a2"} Jan 23 14:19:45 crc kubenswrapper[4771]: I0123 14:19:45.566303 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b","Type":"ContainerStarted","Data":"88b6da2e631644bfa91a7d53e3ed3d8ddcacbc3fb8ec846db27884f60906c22b"} Jan 23 14:19:46 crc kubenswrapper[4771]: I0123 14:19:46.214127 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 23 14:19:46 crc kubenswrapper[4771]: W0123 14:19:46.241456 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9fc32dc_49be_4722_9430_260c9ae2da80.slice/crio-16df19bbd5860bdd0290c94855ec4b90e6f26aa61fa46bb039a5312be37715c3 WatchSource:0}: Error finding container 16df19bbd5860bdd0290c94855ec4b90e6f26aa61fa46bb039a5312be37715c3: Status 404 returned error can't find the container with id 16df19bbd5860bdd0290c94855ec4b90e6f26aa61fa46bb039a5312be37715c3 Jan 23 14:19:46 crc kubenswrapper[4771]: I0123 14:19:46.588136 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"3f3184c8-6eb4-417e-aaab-707a459e8d6e","Type":"ContainerStarted","Data":"d54c650b04491fe1dcbf55f9d97647d0a3c2896753bfc9af232a52d3f5fe8aee"} Jan 23 14:19:46 crc kubenswrapper[4771]: I0123 14:19:46.595840 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"b9fc32dc-49be-4722-9430-260c9ae2da80","Type":"ContainerStarted","Data":"16df19bbd5860bdd0290c94855ec4b90e6f26aa61fa46bb039a5312be37715c3"} Jan 23 14:19:46 crc kubenswrapper[4771]: I0123 14:19:46.599535 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b","Type":"ContainerStarted","Data":"29604f62f07da06194ba9b4822a06bff93fd658706e5bfe7e143ce9868d10d89"} Jan 23 14:19:46 crc kubenswrapper[4771]: I0123 14:19:46.599891 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b","Type":"ContainerStarted","Data":"e59af63939864875abf9bc735f574d5d96e340aeae3b24d1181b1ce9fc70813b"} Jan 23 14:19:46 crc kubenswrapper[4771]: I0123 14:19:46.634846 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.413109829 podStartE2EDuration="3.634810418s" podCreationTimestamp="2026-01-23 14:19:43 +0000 UTC" firstStartedPulling="2026-01-23 14:19:45.271884422 +0000 UTC m=+2826.294422047" lastFinishedPulling="2026-01-23 14:19:45.493585011 +0000 UTC m=+2826.516122636" observedRunningTime="2026-01-23 14:19:46.627371443 +0000 UTC m=+2827.649909068" watchObservedRunningTime="2026-01-23 14:19:46.634810418 +0000 UTC m=+2827.657348053" Jan 23 14:19:47 crc kubenswrapper[4771]: I0123 
14:19:47.637582 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"3f3184c8-6eb4-417e-aaab-707a459e8d6e","Type":"ContainerStarted","Data":"3314ff4f42898d38afdc525fe910e266484ffea55c752803403ed8005bf09fad"} Jan 23 14:19:47 crc kubenswrapper[4771]: I0123 14:19:47.659867 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"b9fc32dc-49be-4722-9430-260c9ae2da80","Type":"ContainerStarted","Data":"3e8bbac307dffb72034d046c313323219d70954ab88bd835ab1165c49d5f36ff"} Jan 23 14:19:47 crc kubenswrapper[4771]: I0123 14:19:47.659944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"b9fc32dc-49be-4722-9430-260c9ae2da80","Type":"ContainerStarted","Data":"bffa20a77c2d9cf091becf535cef0c4d6b97ed4041979df07635fa5ca6a4f8a5"} Jan 23 14:19:47 crc kubenswrapper[4771]: I0123 14:19:47.712162 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=3.360349113 podStartE2EDuration="3.712127105s" podCreationTimestamp="2026-01-23 14:19:44 +0000 UTC" firstStartedPulling="2026-01-23 14:19:45.499999513 +0000 UTC m=+2826.522537138" lastFinishedPulling="2026-01-23 14:19:45.851777505 +0000 UTC m=+2826.874315130" observedRunningTime="2026-01-23 14:19:47.680930589 +0000 UTC m=+2828.703468204" watchObservedRunningTime="2026-01-23 14:19:47.712127105 +0000 UTC m=+2828.734664740" Jan 23 14:19:47 crc kubenswrapper[4771]: I0123 14:19:47.738468 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.738431117 podStartE2EDuration="3.738431117s" podCreationTimestamp="2026-01-23 14:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:19:47.734234084 +0000 UTC m=+2828.756771709" watchObservedRunningTime="2026-01-23 14:19:47.738431117 +0000 UTC m=+2828.760968752" Jan 23 14:19:49 crc kubenswrapper[4771]: I0123 14:19:49.379055 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 23 14:19:49 crc kubenswrapper[4771]: I0123 14:19:49.678304 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:49 crc kubenswrapper[4771]: I0123 14:19:49.784968 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:19:54 crc kubenswrapper[4771]: I0123 14:19:54.592538 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 23 14:19:54 crc kubenswrapper[4771]: I0123 14:19:54.935397 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 23 14:19:55 crc kubenswrapper[4771]: I0123 14:19:55.178543 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 23 14:20:00 crc kubenswrapper[4771]: I0123 14:20:00.313922 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:20:00 crc kubenswrapper[4771]: I0123 14:20:00.314895 4771 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.519451 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2r27"] Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.523344 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.539145 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2r27"] Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.557447 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw7z4\" (UniqueName: \"kubernetes.io/projected/5ae212a8-fb5c-4ee9-820b-928688821cb0-kube-api-access-gw7z4\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.557569 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-catalog-content\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.557650 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-utilities\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.660314 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-utilities\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.660574 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw7z4\" (UniqueName: \"kubernetes.io/projected/5ae212a8-fb5c-4ee9-820b-928688821cb0-kube-api-access-gw7z4\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.660655 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-catalog-content\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.661583 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-utilities\") pod \"community-operators-x2r27\" (UID: 
\"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.661832 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-catalog-content\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.685665 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw7z4\" (UniqueName: \"kubernetes.io/projected/5ae212a8-fb5c-4ee9-820b-928688821cb0-kube-api-access-gw7z4\") pod \"community-operators-x2r27\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:29 crc kubenswrapper[4771]: I0123 14:20:29.852626 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:30 crc kubenswrapper[4771]: I0123 14:20:30.312677 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:20:30 crc kubenswrapper[4771]: I0123 14:20:30.313190 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:20:30 crc kubenswrapper[4771]: I0123 14:20:30.640689 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2r27"] Jan 23 14:20:31 crc kubenswrapper[4771]: I0123 14:20:31.314557 4771 generic.go:334] "Generic (PLEG): container finished" podID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerID="ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76" exitCode=0 Jan 23 14:20:31 crc kubenswrapper[4771]: I0123 14:20:31.314620 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2r27" event={"ID":"5ae212a8-fb5c-4ee9-820b-928688821cb0","Type":"ContainerDied","Data":"ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76"} Jan 23 14:20:31 crc kubenswrapper[4771]: I0123 14:20:31.314906 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2r27" event={"ID":"5ae212a8-fb5c-4ee9-820b-928688821cb0","Type":"ContainerStarted","Data":"eb9b0809e343bd24e5e6793fa375f4a56171ddb6f809de9574e9eccbdefe0a7c"} Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.089044 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvtph"] Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.092612 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.145946 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvtph"] Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.146663 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-catalog-content\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.146731 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-utilities\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.146873 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk2l5\" (UniqueName: \"kubernetes.io/projected/25b7b64e-4e78-4fb0-8315-75fee67aebce-kube-api-access-lk2l5\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.249449 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-catalog-content\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.249499 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-utilities\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.249612 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk2l5\" (UniqueName: \"kubernetes.io/projected/25b7b64e-4e78-4fb0-8315-75fee67aebce-kube-api-access-lk2l5\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.250402 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-catalog-content\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.250759 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-utilities\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.273392 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lk2l5\" (UniqueName: \"kubernetes.io/projected/25b7b64e-4e78-4fb0-8315-75fee67aebce-kube-api-access-lk2l5\") pod \"redhat-operators-xvtph\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:32 crc kubenswrapper[4771]: I0123 14:20:32.413882 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:33 crc kubenswrapper[4771]: I0123 14:20:33.015337 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvtph"] Jan 23 14:20:33 crc kubenswrapper[4771]: W0123 14:20:33.017503 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25b7b64e_4e78_4fb0_8315_75fee67aebce.slice/crio-06ddf5fddec4b94295673bea08060fb6fd7e3dfe243a5adb6dceba6ed34dcf8c WatchSource:0}: Error finding container 06ddf5fddec4b94295673bea08060fb6fd7e3dfe243a5adb6dceba6ed34dcf8c: Status 404 returned error can't find the container with id 06ddf5fddec4b94295673bea08060fb6fd7e3dfe243a5adb6dceba6ed34dcf8c Jan 23 14:20:33 crc kubenswrapper[4771]: I0123 14:20:33.406538 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2r27" event={"ID":"5ae212a8-fb5c-4ee9-820b-928688821cb0","Type":"ContainerStarted","Data":"58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7"} Jan 23 14:20:33 crc kubenswrapper[4771]: I0123 14:20:33.425870 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvtph" event={"ID":"25b7b64e-4e78-4fb0-8315-75fee67aebce","Type":"ContainerStarted","Data":"06ddf5fddec4b94295673bea08060fb6fd7e3dfe243a5adb6dceba6ed34dcf8c"} Jan 23 14:20:34 crc kubenswrapper[4771]: I0123 14:20:34.443578 4771 generic.go:334] "Generic (PLEG): container finished" podID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerID="58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7" exitCode=0 Jan 23 14:20:34 crc kubenswrapper[4771]: I0123 14:20:34.443670 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2r27" event={"ID":"5ae212a8-fb5c-4ee9-820b-928688821cb0","Type":"ContainerDied","Data":"58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7"} Jan 23 14:20:34 crc kubenswrapper[4771]: I0123 14:20:34.448934 4771 generic.go:334] "Generic (PLEG): container finished" podID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerID="87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8" exitCode=0 Jan 23 14:20:34 crc kubenswrapper[4771]: I0123 14:20:34.448996 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvtph" event={"ID":"25b7b64e-4e78-4fb0-8315-75fee67aebce","Type":"ContainerDied","Data":"87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8"} Jan 23 14:20:35 crc kubenswrapper[4771]: I0123 14:20:35.461845 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvtph" event={"ID":"25b7b64e-4e78-4fb0-8315-75fee67aebce","Type":"ContainerStarted","Data":"27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7"} Jan 23 14:20:35 crc kubenswrapper[4771]: I0123 14:20:35.465399 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2r27" 
event={"ID":"5ae212a8-fb5c-4ee9-820b-928688821cb0","Type":"ContainerStarted","Data":"5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0"} Jan 23 14:20:35 crc kubenswrapper[4771]: I0123 14:20:35.511792 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2r27" podStartSLOduration=2.712347546 podStartE2EDuration="6.511766457s" podCreationTimestamp="2026-01-23 14:20:29 +0000 UTC" firstStartedPulling="2026-01-23 14:20:31.317119742 +0000 UTC m=+2872.339657367" lastFinishedPulling="2026-01-23 14:20:35.116538653 +0000 UTC m=+2876.139076278" observedRunningTime="2026-01-23 14:20:35.50588342 +0000 UTC m=+2876.528421065" watchObservedRunningTime="2026-01-23 14:20:35.511766457 +0000 UTC m=+2876.534304082" Jan 23 14:20:39 crc kubenswrapper[4771]: E0123 14:20:39.178297 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25b7b64e_4e78_4fb0_8315_75fee67aebce.slice/crio-conmon-27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:20:39 crc kubenswrapper[4771]: I0123 14:20:39.508023 4771 generic.go:334] "Generic (PLEG): container finished" podID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerID="27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7" exitCode=0 Jan 23 14:20:39 crc kubenswrapper[4771]: I0123 14:20:39.508115 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvtph" event={"ID":"25b7b64e-4e78-4fb0-8315-75fee67aebce","Type":"ContainerDied","Data":"27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7"} Jan 23 14:20:39 crc kubenswrapper[4771]: I0123 14:20:39.853115 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:39 crc kubenswrapper[4771]: I0123 14:20:39.853204 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:39 crc kubenswrapper[4771]: I0123 14:20:39.916186 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:40 crc kubenswrapper[4771]: I0123 14:20:40.589004 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:41 crc kubenswrapper[4771]: I0123 14:20:41.083651 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2r27"] Jan 23 14:20:41 crc kubenswrapper[4771]: I0123 14:20:41.541603 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvtph" event={"ID":"25b7b64e-4e78-4fb0-8315-75fee67aebce","Type":"ContainerStarted","Data":"150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde"} Jan 23 14:20:41 crc kubenswrapper[4771]: I0123 14:20:41.577346 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvtph" podStartSLOduration=3.26257861 podStartE2EDuration="9.577316097s" podCreationTimestamp="2026-01-23 14:20:32 +0000 UTC" firstStartedPulling="2026-01-23 14:20:34.454727011 +0000 UTC m=+2875.477264626" lastFinishedPulling="2026-01-23 14:20:40.769464478 +0000 UTC m=+2881.792002113" observedRunningTime="2026-01-23 14:20:41.568590591 
+0000 UTC m=+2882.591128216" watchObservedRunningTime="2026-01-23 14:20:41.577316097 +0000 UTC m=+2882.599853722" Jan 23 14:20:42 crc kubenswrapper[4771]: I0123 14:20:42.448640 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:42 crc kubenswrapper[4771]: I0123 14:20:42.449308 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:42 crc kubenswrapper[4771]: I0123 14:20:42.553996 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x2r27" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="registry-server" containerID="cri-o://5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0" gracePeriod=2 Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.094706 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.169518 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-catalog-content\") pod \"5ae212a8-fb5c-4ee9-820b-928688821cb0\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.169596 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw7z4\" (UniqueName: \"kubernetes.io/projected/5ae212a8-fb5c-4ee9-820b-928688821cb0-kube-api-access-gw7z4\") pod \"5ae212a8-fb5c-4ee9-820b-928688821cb0\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.169821 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-utilities\") pod \"5ae212a8-fb5c-4ee9-820b-928688821cb0\" (UID: \"5ae212a8-fb5c-4ee9-820b-928688821cb0\") " Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.170557 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-utilities" (OuterVolumeSpecName: "utilities") pod "5ae212a8-fb5c-4ee9-820b-928688821cb0" (UID: "5ae212a8-fb5c-4ee9-820b-928688821cb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.178950 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae212a8-fb5c-4ee9-820b-928688821cb0-kube-api-access-gw7z4" (OuterVolumeSpecName: "kube-api-access-gw7z4") pod "5ae212a8-fb5c-4ee9-820b-928688821cb0" (UID: "5ae212a8-fb5c-4ee9-820b-928688821cb0"). InnerVolumeSpecName "kube-api-access-gw7z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.227875 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ae212a8-fb5c-4ee9-820b-928688821cb0" (UID: "5ae212a8-fb5c-4ee9-820b-928688821cb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.272944 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.272992 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae212a8-fb5c-4ee9-820b-928688821cb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.273007 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw7z4\" (UniqueName: \"kubernetes.io/projected/5ae212a8-fb5c-4ee9-820b-928688821cb0-kube-api-access-gw7z4\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.502422 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvtph" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="registry-server" probeResult="failure" output=< Jan 23 14:20:43 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 14:20:43 crc kubenswrapper[4771]: > Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.568301 4771 generic.go:334] "Generic (PLEG): container finished" podID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerID="5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0" exitCode=0 Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.568371 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2r27" event={"ID":"5ae212a8-fb5c-4ee9-820b-928688821cb0","Type":"ContainerDied","Data":"5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0"} Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.568439 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2r27" event={"ID":"5ae212a8-fb5c-4ee9-820b-928688821cb0","Type":"ContainerDied","Data":"eb9b0809e343bd24e5e6793fa375f4a56171ddb6f809de9574e9eccbdefe0a7c"} Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.568470 4771 scope.go:117] "RemoveContainer" containerID="5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.568689 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2r27" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.607313 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2r27"] Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.614167 4771 scope.go:117] "RemoveContainer" containerID="58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.623465 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x2r27"] Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.655726 4771 scope.go:117] "RemoveContainer" containerID="ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.722306 4771 scope.go:117] "RemoveContainer" containerID="5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0" Jan 23 14:20:43 crc kubenswrapper[4771]: E0123 14:20:43.723665 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0\": container with ID starting with 5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0 not found: ID does not exist" containerID="5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.723718 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0"} err="failed to get container status \"5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0\": rpc error: code = NotFound desc = could not find container \"5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0\": container with ID starting with 5bc30bb935f4138a530657a30ba72d3c53394df466e6586c00c0534c0b9773a0 not found: ID does not exist" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.723825 4771 scope.go:117] "RemoveContainer" containerID="58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7" Jan 23 14:20:43 crc kubenswrapper[4771]: E0123 14:20:43.724303 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7\": container with ID starting with 58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7 not found: ID does not exist" containerID="58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.724362 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7"} err="failed to get container status \"58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7\": rpc error: code = NotFound desc = could not find container \"58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7\": container with ID starting with 58a22f30f38981873514b0860fa2459268a390acdf75e0a788fceb523df088b7 not found: ID does not exist" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.724402 4771 scope.go:117] "RemoveContainer" containerID="ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76" Jan 23 14:20:43 crc kubenswrapper[4771]: E0123 14:20:43.725065 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76\": container with ID starting with ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76 not found: ID does not exist" containerID="ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76" Jan 23 14:20:43 crc kubenswrapper[4771]: I0123 14:20:43.725093 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76"} err="failed to get container status \"ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76\": rpc error: code = NotFound desc = could not find container \"ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76\": container with ID starting with ee4c082fc64e4c2b94f14ef5cf406b954aaba3391915371a19f277dbd43d2d76 not found: ID does not exist" Jan 23 14:20:45 crc kubenswrapper[4771]: I0123 14:20:45.242109 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" path="/var/lib/kubelet/pods/5ae212a8-fb5c-4ee9-820b-928688821cb0/volumes" Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.028162 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.029927 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="prometheus" containerID="cri-o://9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5" gracePeriod=600 Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.030622 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="thanos-sidecar" containerID="cri-o://deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d" gracePeriod=600 Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.030685 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="config-reloader" containerID="cri-o://8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616" gracePeriod=600 Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.632438 4771 generic.go:334] "Generic (PLEG): container finished" podID="c6c312ce-f6df-4617-ba37-6675897fa368" containerID="deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d" exitCode=0 Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.632475 4771 generic.go:334] "Generic (PLEG): container finished" podID="c6c312ce-f6df-4617-ba37-6675897fa368" containerID="9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5" exitCode=0 Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.632516 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerDied","Data":"deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d"} Jan 23 14:20:48 crc kubenswrapper[4771]: I0123 14:20:48.632651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerDied","Data":"9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5"} Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.096949 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.234794 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.234901 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.234955 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6c312ce-f6df-4617-ba37-6675897fa368-config-out\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235001 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-1\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235060 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-thanos-prometheus-http-client-file\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235102 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-2\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235131 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-0\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235288 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-secret-combined-ca-bundle\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235308 4771 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235372 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235552 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn8nx\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-kube-api-access-mn8nx\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235625 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-tls-assets\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.235660 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-config\") pod \"c6c312ce-f6df-4617-ba37-6675897fa368\" (UID: \"c6c312ce-f6df-4617-ba37-6675897fa368\") " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.238937 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.239319 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.240098 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.246141 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-kube-api-access-mn8nx" (OuterVolumeSpecName: "kube-api-access-mn8nx") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "kube-api-access-mn8nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.246829 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.249227 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.249255 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.256994 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.257170 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-config" (OuterVolumeSpecName: "config") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.260390 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.261007 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c312ce-f6df-4617-ba37-6675897fa368-config-out" (OuterVolumeSpecName: "config-out") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.290774 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338334 4771 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338374 4771 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338385 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn8nx\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-kube-api-access-mn8nx\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338394 4771 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c6c312ce-f6df-4617-ba37-6675897fa368-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338403 4771 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338465 4771 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") on node \"crc\" " Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338476 4771 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338489 4771 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c6c312ce-f6df-4617-ba37-6675897fa368-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338499 4771 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338511 4771 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338521 4771 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.338532 4771 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c6c312ce-f6df-4617-ba37-6675897fa368-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.358161 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config" (OuterVolumeSpecName: "web-config") pod "c6c312ce-f6df-4617-ba37-6675897fa368" (UID: "c6c312ce-f6df-4617-ba37-6675897fa368"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.387897 4771 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.388123 4771 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310") on node "crc" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.440833 4771 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c6c312ce-f6df-4617-ba37-6675897fa368-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.440878 4771 reconciler_common.go:293] "Volume detached for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.646571 4771 generic.go:334] "Generic (PLEG): container finished" podID="c6c312ce-f6df-4617-ba37-6675897fa368" containerID="8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616" exitCode=0 Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.646626 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerDied","Data":"8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616"} Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.646675 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c6c312ce-f6df-4617-ba37-6675897fa368","Type":"ContainerDied","Data":"1bd00ff525f7e65d435572146c06dac45466e8c832f5db5fe4c2c5b8e41c38af"} Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.646704 4771 scope.go:117] "RemoveContainer" 
containerID="deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.646738 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.675072 4771 scope.go:117] "RemoveContainer" containerID="8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.690809 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.704011 4771 scope.go:117] "RemoveContainer" containerID="9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.707834 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751032 4771 scope.go:117] "RemoveContainer" containerID="28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751146 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.751743 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="config-reloader" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751765 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="config-reloader" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.751790 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="extract-utilities" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751797 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="extract-utilities" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.751819 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="thanos-sidecar" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751826 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="thanos-sidecar" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.751833 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="extract-content" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751840 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="extract-content" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.751848 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="prometheus" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751853 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="prometheus" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.751866 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="init-config-reloader" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751871 4771 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="init-config-reloader" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.751895 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="registry-server" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.751902 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="registry-server" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.752174 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="config-reloader" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.752214 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="thanos-sidecar" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.752230 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" containerName="prometheus" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.752245 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae212a8-fb5c-4ee9-820b-928688821cb0" containerName="registry-server" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.756767 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.764710 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-qkchd" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.764978 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.765108 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.766250 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.768780 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.770265 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.770707 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.777422 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.799595 4771 scope.go:117] "RemoveContainer" containerID="deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.802884 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d\": container with ID starting with deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d not found: ID does not 
exist" containerID="deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.802935 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d"} err="failed to get container status \"deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d\": rpc error: code = NotFound desc = could not find container \"deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d\": container with ID starting with deab5483337e60a33fc9bf44bdb05b7d11f6e6b8d1f16f6f8c994c912982f39d not found: ID does not exist" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.802968 4771 scope.go:117] "RemoveContainer" containerID="8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.803670 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616\": container with ID starting with 8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616 not found: ID does not exist" containerID="8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.803700 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616"} err="failed to get container status \"8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616\": rpc error: code = NotFound desc = could not find container \"8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616\": container with ID starting with 8c1ff97f95e17f8bd7024f3b346231ac6dfa110244658f211a722005011bb616 not found: ID does not exist" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.803723 4771 scope.go:117] "RemoveContainer" containerID="9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.807542 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5\": container with ID starting with 9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5 not found: ID does not exist" containerID="9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.807579 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5"} err="failed to get container status \"9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5\": rpc error: code = NotFound desc = could not find container \"9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5\": container with ID starting with 9d69dfd3e4b9e103528b0cb3a151b21c9877cbb2771e208b8ce22c7c9c46e9c5 not found: ID does not exist" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.807605 4771 scope.go:117] "RemoveContainer" containerID="28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0" Jan 23 14:20:49 crc kubenswrapper[4771]: E0123 14:20:49.809898 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0\": container with ID starting with 28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0 not found: ID does not exist" containerID="28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.809966 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0"} err="failed to get container status \"28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0\": rpc error: code = NotFound desc = could not find container \"28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0\": container with ID starting with 28dd65805fcd4bf304df7851d4640357aabcddccfe0e2c4b1bb42aa8272989d0 not found: ID does not exist" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.813286 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850244 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0542ded-3d93-4a78-a31b-f25fce8407e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850328 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850368 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850543 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850627 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850703 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850744 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.850877 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh5p7\" (UniqueName: \"kubernetes.io/projected/b0542ded-3d93-4a78-a31b-f25fce8407e6-kube-api-access-sh5p7\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.851048 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.851132 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0542ded-3d93-4a78-a31b-f25fce8407e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.851215 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.851272 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.851381 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953267 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0542ded-3d93-4a78-a31b-f25fce8407e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953737 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953767 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953826 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953888 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0542ded-3d93-4a78-a31b-f25fce8407e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953917 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953942 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.953986 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.954012 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.954037 4771 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.954063 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.954082 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh5p7\" (UniqueName: \"kubernetes.io/projected/b0542ded-3d93-4a78-a31b-f25fce8407e6-kube-api-access-sh5p7\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.954153 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.955559 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.955958 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.956431 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/b0542ded-3d93-4a78-a31b-f25fce8407e6-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.962115 4771 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.962162 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fedd087c020fedaa53662fb68cb2c644ee54851c0d7a037bd330262bcce6f5b4/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.966589 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.967580 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.977237 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.979816 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b0542ded-3d93-4a78-a31b-f25fce8407e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.980823 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b0542ded-3d93-4a78-a31b-f25fce8407e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.980879 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.982660 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.982998 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b0542ded-3d93-4a78-a31b-f25fce8407e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:49 crc kubenswrapper[4771]: I0123 14:20:49.984567 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh5p7\" (UniqueName: \"kubernetes.io/projected/b0542ded-3d93-4a78-a31b-f25fce8407e6-kube-api-access-sh5p7\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:50 crc kubenswrapper[4771]: I0123 14:20:50.025133 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2a2ba6f6-5092-48f3-9b40-92cc803f9310\") pod \"prometheus-metric-storage-0\" (UID: \"b0542ded-3d93-4a78-a31b-f25fce8407e6\") " pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:50 crc kubenswrapper[4771]: I0123 14:20:50.143427 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 14:20:50 crc kubenswrapper[4771]: I0123 14:20:50.692106 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 14:20:51 crc kubenswrapper[4771]: I0123 14:20:51.242650 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c312ce-f6df-4617-ba37-6675897fa368" path="/var/lib/kubelet/pods/c6c312ce-f6df-4617-ba37-6675897fa368/volumes" Jan 23 14:20:51 crc kubenswrapper[4771]: I0123 14:20:51.669652 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0542ded-3d93-4a78-a31b-f25fce8407e6","Type":"ContainerStarted","Data":"e6e8dc09f08881b52470aba876e1b4ff6080a6d29af17a798cd0c6a648d04d4e"} Jan 23 14:20:52 crc kubenswrapper[4771]: I0123 14:20:52.465263 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:52 crc kubenswrapper[4771]: I0123 14:20:52.530760 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:52 crc kubenswrapper[4771]: I0123 14:20:52.708458 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvtph"] Jan 23 14:20:53 crc kubenswrapper[4771]: I0123 14:20:53.687308 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xvtph" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="registry-server" containerID="cri-o://150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde" gracePeriod=2 Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.450776 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.468190 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk2l5\" (UniqueName: \"kubernetes.io/projected/25b7b64e-4e78-4fb0-8315-75fee67aebce-kube-api-access-lk2l5\") pod \"25b7b64e-4e78-4fb0-8315-75fee67aebce\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.468587 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-utilities\") pod \"25b7b64e-4e78-4fb0-8315-75fee67aebce\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.468677 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-catalog-content\") pod \"25b7b64e-4e78-4fb0-8315-75fee67aebce\" (UID: \"25b7b64e-4e78-4fb0-8315-75fee67aebce\") " Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.470624 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-utilities" (OuterVolumeSpecName: "utilities") pod "25b7b64e-4e78-4fb0-8315-75fee67aebce" (UID: "25b7b64e-4e78-4fb0-8315-75fee67aebce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.477735 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b7b64e-4e78-4fb0-8315-75fee67aebce-kube-api-access-lk2l5" (OuterVolumeSpecName: "kube-api-access-lk2l5") pod "25b7b64e-4e78-4fb0-8315-75fee67aebce" (UID: "25b7b64e-4e78-4fb0-8315-75fee67aebce"). InnerVolumeSpecName "kube-api-access-lk2l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.571474 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.571526 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk2l5\" (UniqueName: \"kubernetes.io/projected/25b7b64e-4e78-4fb0-8315-75fee67aebce-kube-api-access-lk2l5\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.600116 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25b7b64e-4e78-4fb0-8315-75fee67aebce" (UID: "25b7b64e-4e78-4fb0-8315-75fee67aebce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.673999 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25b7b64e-4e78-4fb0-8315-75fee67aebce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.706231 4771 generic.go:334] "Generic (PLEG): container finished" podID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerID="150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde" exitCode=0 Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.706356 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvtph" event={"ID":"25b7b64e-4e78-4fb0-8315-75fee67aebce","Type":"ContainerDied","Data":"150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde"} Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.706396 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvtph" event={"ID":"25b7b64e-4e78-4fb0-8315-75fee67aebce","Type":"ContainerDied","Data":"06ddf5fddec4b94295673bea08060fb6fd7e3dfe243a5adb6dceba6ed34dcf8c"} Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.706435 4771 scope.go:117] "RemoveContainer" containerID="150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.706678 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvtph" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.763815 4771 scope.go:117] "RemoveContainer" containerID="27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.781764 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvtph"] Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.793710 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xvtph"] Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.797965 4771 scope.go:117] "RemoveContainer" containerID="87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.850108 4771 scope.go:117] "RemoveContainer" containerID="150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde" Jan 23 14:20:54 crc kubenswrapper[4771]: E0123 14:20:54.850611 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde\": container with ID starting with 150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde not found: ID does not exist" containerID="150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.850646 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde"} err="failed to get container status \"150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde\": rpc error: code = NotFound desc = could not find container \"150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde\": container with ID starting with 150e569fc6b9d56ab76bc78e00a8215fefac80107cdd4298633ebf9386a81dde not found: ID does not exist" Jan 23 14:20:54 crc 
kubenswrapper[4771]: I0123 14:20:54.850670 4771 scope.go:117] "RemoveContainer" containerID="27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7" Jan 23 14:20:54 crc kubenswrapper[4771]: E0123 14:20:54.850960 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7\": container with ID starting with 27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7 not found: ID does not exist" containerID="27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.850998 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7"} err="failed to get container status \"27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7\": rpc error: code = NotFound desc = could not find container \"27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7\": container with ID starting with 27e53771ee98be6650e46f0637b6fd9cbfdb4fba23876a94f29d6cfd1d88e7c7 not found: ID does not exist" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.851018 4771 scope.go:117] "RemoveContainer" containerID="87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8" Jan 23 14:20:54 crc kubenswrapper[4771]: E0123 14:20:54.851268 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8\": container with ID starting with 87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8 not found: ID does not exist" containerID="87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8" Jan 23 14:20:54 crc kubenswrapper[4771]: I0123 14:20:54.851304 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8"} err="failed to get container status \"87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8\": rpc error: code = NotFound desc = could not find container \"87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8\": container with ID starting with 87b77d3551db572dd267a9b168eb7752a9326c696997248b5b083ceec4da0ab8 not found: ID does not exist" Jan 23 14:20:55 crc kubenswrapper[4771]: I0123 14:20:55.244004 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" path="/var/lib/kubelet/pods/25b7b64e-4e78-4fb0-8315-75fee67aebce/volumes" Jan 23 14:20:55 crc kubenswrapper[4771]: I0123 14:20:55.730351 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0542ded-3d93-4a78-a31b-f25fce8407e6","Type":"ContainerStarted","Data":"b592d4de4c3176e2d2e2ca8fd213db8e80f5b9960752934e8d65a91349a3cdce"} Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.312647 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.313176 4771 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.313243 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.314494 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.314557 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" gracePeriod=600 Jan 23 14:21:00 crc kubenswrapper[4771]: E0123 14:21:00.480609 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.788610 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" exitCode=0 Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.788678 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725"} Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.788741 4771 scope.go:117] "RemoveContainer" containerID="232aa60aeca86ab158652639a3339367db4e0a8c92e9778fe7d9b53a5dcf7e08" Jan 23 14:21:00 crc kubenswrapper[4771]: I0123 14:21:00.789956 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:21:00 crc kubenswrapper[4771]: E0123 14:21:00.790563 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:21:03 crc kubenswrapper[4771]: I0123 14:21:03.828941 4771 generic.go:334] "Generic (PLEG): container finished" podID="b0542ded-3d93-4a78-a31b-f25fce8407e6" containerID="b592d4de4c3176e2d2e2ca8fd213db8e80f5b9960752934e8d65a91349a3cdce" exitCode=0 Jan 23 14:21:03 crc kubenswrapper[4771]: I0123 
14:21:03.829651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0542ded-3d93-4a78-a31b-f25fce8407e6","Type":"ContainerDied","Data":"b592d4de4c3176e2d2e2ca8fd213db8e80f5b9960752934e8d65a91349a3cdce"} Jan 23 14:21:04 crc kubenswrapper[4771]: I0123 14:21:04.846662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0542ded-3d93-4a78-a31b-f25fce8407e6","Type":"ContainerStarted","Data":"6ed5b99d72a44dbe960ded86d1df692235c26b2879adc952275b402154b09553"} Jan 23 14:21:08 crc kubenswrapper[4771]: I0123 14:21:08.898099 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0542ded-3d93-4a78-a31b-f25fce8407e6","Type":"ContainerStarted","Data":"4322c32961a5858d857e7cb7e9b1342c65431a8143f06c4757140643c4848122"} Jan 23 14:21:08 crc kubenswrapper[4771]: I0123 14:21:08.898927 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b0542ded-3d93-4a78-a31b-f25fce8407e6","Type":"ContainerStarted","Data":"38f7e4e3a3069bee5eaf9b320b16ef76e9e18c0607f4d217b8afbf18eaadde43"} Jan 23 14:21:08 crc kubenswrapper[4771]: I0123 14:21:08.936062 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.936034336 podStartE2EDuration="19.936034336s" podCreationTimestamp="2026-01-23 14:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:21:08.932075932 +0000 UTC m=+2909.954613557" watchObservedRunningTime="2026-01-23 14:21:08.936034336 +0000 UTC m=+2909.958571971" Jan 23 14:21:10 crc kubenswrapper[4771]: I0123 14:21:10.144516 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 14:21:16 crc kubenswrapper[4771]: I0123 14:21:16.228631 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:21:16 crc kubenswrapper[4771]: E0123 14:21:16.229747 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:21:20 crc kubenswrapper[4771]: I0123 14:21:20.144041 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 14:21:20 crc kubenswrapper[4771]: I0123 14:21:20.155751 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 14:21:21 crc kubenswrapper[4771]: I0123 14:21:21.032584 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 14:21:29 crc kubenswrapper[4771]: I0123 14:21:29.237038 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:21:29 crc kubenswrapper[4771]: E0123 14:21:29.238274 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:21:41 crc kubenswrapper[4771]: I0123 14:21:41.228896 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:21:41 crc kubenswrapper[4771]: E0123 14:21:41.231704 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.320195 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 14:21:47 crc kubenswrapper[4771]: E0123 14:21:47.321474 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="extract-content" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.321492 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="extract-content" Jan 23 14:21:47 crc kubenswrapper[4771]: E0123 14:21:47.321504 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="extract-utilities" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.321511 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="extract-utilities" Jan 23 14:21:47 crc kubenswrapper[4771]: E0123 14:21:47.321531 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="registry-server" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.321537 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="registry-server" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.321777 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b7b64e-4e78-4fb0-8315-75fee67aebce" containerName="registry-server" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.322734 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.325205 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rcp9d" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.326365 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.328142 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.328218 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.331721 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.342278 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.343041 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.343233 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-config-data\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.446120 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.446661 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-config-data\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.446702 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.446747 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.446794 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.446826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.446913 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5qwl\" (UniqueName: \"kubernetes.io/projected/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-kube-api-access-f5qwl\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.447273 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.447500 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.447579 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.448483 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-config-data\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.455899 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.551185 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.551800 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.551868 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.551987 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.552029 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.552179 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5qwl\" (UniqueName: \"kubernetes.io/projected/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-kube-api-access-f5qwl\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.552536 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.552903 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.554868 4771 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.555586 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 
23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.555613 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.572358 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5qwl\" (UniqueName: \"kubernetes.io/projected/4b1420d2-dfc0-492c-b21d-30eda7e8c59d-kube-api-access-f5qwl\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.588577 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"4b1420d2-dfc0-492c-b21d-30eda7e8c59d\") " pod="openstack/tempest-tests-tempest" Jan 23 14:21:47 crc kubenswrapper[4771]: I0123 14:21:47.654834 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 23 14:21:48 crc kubenswrapper[4771]: I0123 14:21:48.160653 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 23 14:21:48 crc kubenswrapper[4771]: I0123 14:21:48.165074 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:21:48 crc kubenswrapper[4771]: I0123 14:21:48.374112 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4b1420d2-dfc0-492c-b21d-30eda7e8c59d","Type":"ContainerStarted","Data":"769a86614f8e31869fa930ef3b40bab143750c85825134845406be00e2162d60"} Jan 23 14:21:54 crc kubenswrapper[4771]: I0123 14:21:54.229348 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:21:54 crc kubenswrapper[4771]: E0123 14:21:54.230468 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.182052 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wp2"] Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.185316 4771 util.go:30] "No sandbox for pod can be found. 
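The run of reconciler and operation_generator lines above is the kubelet volume manager bringing actual state in line with desired state for the tempest-tests-tempest pod: every volume in the pod spec is verified as attached, a MountVolume operation is started for it, and the per-plugin SetUp (empty-dir, secret, projected, local-volume) reports success before the kubelet concludes there is no sandbox yet and starts one. A minimal sketch of that reconcile pattern in Go, with simplified types that are not kubelet's actual implementation:

package main

import "fmt"

type volume struct{ name, plugin string }

// reconcile mounts every desired volume that is not already in the actual
// state, mirroring the "MountVolume started" / "SetUp succeeded" pairs above.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.name] {
			continue // actual state already matches desired state
		}
		fmt.Printf("MountVolume started for volume %q (%s)\n", v.name, v.plugin)
		// a real implementation would call the plugin's SetUp here
		mounted[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	desired := []volume{
		{"ca-certs", "kubernetes.io/secret"},
		{"kube-api-access-f5qwl", "kubernetes.io/projected"},
	}
	reconcile(desired, map[string]bool{})
}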
Jan 23 14:21:48 crc kubenswrapper[4771]: I0123 14:21:48.160653 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 23 14:21:48 crc kubenswrapper[4771]: I0123 14:21:48.165074 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 14:21:48 crc kubenswrapper[4771]: I0123 14:21:48.374112 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4b1420d2-dfc0-492c-b21d-30eda7e8c59d","Type":"ContainerStarted","Data":"769a86614f8e31869fa930ef3b40bab143750c85825134845406be00e2162d60"}
Jan 23 14:21:54 crc kubenswrapper[4771]: I0123 14:21:54.229348 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725"
Jan 23 14:21:54 crc kubenswrapper[4771]: E0123 14:21:54.230468 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.182052 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wp2"]
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.185316 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.208974 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wp2"]
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.293459 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-utilities\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.293788 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7h89\" (UniqueName: \"kubernetes.io/projected/bf76a9c4-9afd-499a-92aa-fc633104e4b9-kube-api-access-t7h89\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.294373 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-catalog-content\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.396952 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-catalog-content\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.397184 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-utilities\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.397367 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7h89\" (UniqueName: \"kubernetes.io/projected/bf76a9c4-9afd-499a-92aa-fc633104e4b9-kube-api-access-t7h89\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.397521 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-catalog-content\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.397983 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-utilities\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.440504 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7h89\" (UniqueName: \"kubernetes.io/projected/bf76a9c4-9afd-499a-92aa-fc633104e4b9-kube-api-access-t7h89\") pod \"redhat-marketplace-g6wp2\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") " pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:21:56 crc kubenswrapper[4771]: I0123 14:21:56.523225 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:22:00 crc kubenswrapper[4771]: I0123 14:22:00.548966 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wp2"]
Jan 23 14:22:00 crc kubenswrapper[4771]: W0123 14:22:00.550321 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf76a9c4_9afd_499a_92aa_fc633104e4b9.slice/crio-f4a91de2e783ca10eba3d388e7125024f6ad025c61977a58971ad2e6bc8e7c79 WatchSource:0}: Error finding container f4a91de2e783ca10eba3d388e7125024f6ad025c61977a58971ad2e6bc8e7c79: Status 404 returned error can't find the container with id f4a91de2e783ca10eba3d388e7125024f6ad025c61977a58971ad2e6bc8e7c79
Jan 23 14:22:01 crc kubenswrapper[4771]: I0123 14:22:01.534454 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"4b1420d2-dfc0-492c-b21d-30eda7e8c59d","Type":"ContainerStarted","Data":"3de65491cf7fa56f170cddda1551f1b81b96224154f1701f1e8bc17af7db2be6"}
Jan 23 14:22:01 crc kubenswrapper[4771]: I0123 14:22:01.538954 4771 generic.go:334] "Generic (PLEG): container finished" podID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerID="bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5" exitCode=0
Jan 23 14:22:01 crc kubenswrapper[4771]: I0123 14:22:01.539018 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wp2" event={"ID":"bf76a9c4-9afd-499a-92aa-fc633104e4b9","Type":"ContainerDied","Data":"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5"}
Jan 23 14:22:01 crc kubenswrapper[4771]: I0123 14:22:01.539042 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wp2" event={"ID":"bf76a9c4-9afd-499a-92aa-fc633104e4b9","Type":"ContainerStarted","Data":"f4a91de2e783ca10eba3d388e7125024f6ad025c61977a58971ad2e6bc8e7c79"}
Jan 23 14:22:01 crc kubenswrapper[4771]: I0123 14:22:01.590903 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.635804535 podStartE2EDuration="15.590877771s" podCreationTimestamp="2026-01-23 14:21:46 +0000 UTC" firstStartedPulling="2026-01-23 14:21:48.164798733 +0000 UTC m=+2949.187336358" lastFinishedPulling="2026-01-23 14:22:00.119871969 +0000 UTC m=+2961.142409594" observedRunningTime="2026-01-23 14:22:01.55983521 +0000 UTC m=+2962.582372835" watchObservedRunningTime="2026-01-23 14:22:01.590877771 +0000 UTC m=+2962.613415396"
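The "Observed pod startup duration" line above is simple arithmetic: podStartSLOduration is the end-to-end startup time minus the image-pull window, which is why 15.59s of wall clock shrinks to a 3.64s SLO figure. Reproducing the numbers from the monotonic m=+ offsets in the log:

package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 2949.187336358 // m=+2949.187336358
		lastFinishedPulling = 2961.142409594 // m=+2961.142409594
		podStartE2E         = 15.590877771   // podStartE2EDuration
	)
	pullWindow := lastFinishedPulling - firstStartedPulling // ~11.955s spent pulling images
	fmt.Printf("podStartSLOduration ~= %.9f\n", podStartE2E-pullWindow) // ~3.635804535
}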
event={"ID":"bf76a9c4-9afd-499a-92aa-fc633104e4b9","Type":"ContainerDied","Data":"4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de"} Jan 23 14:22:04 crc kubenswrapper[4771]: I0123 14:22:04.576826 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wp2" event={"ID":"bf76a9c4-9afd-499a-92aa-fc633104e4b9","Type":"ContainerStarted","Data":"703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f"} Jan 23 14:22:04 crc kubenswrapper[4771]: I0123 14:22:04.608701 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g6wp2" podStartSLOduration=6.171441535 podStartE2EDuration="8.608675622s" podCreationTimestamp="2026-01-23 14:21:56 +0000 UTC" firstStartedPulling="2026-01-23 14:22:01.541576943 +0000 UTC m=+2962.564114568" lastFinishedPulling="2026-01-23 14:22:03.97881103 +0000 UTC m=+2965.001348655" observedRunningTime="2026-01-23 14:22:04.59624301 +0000 UTC m=+2965.618780645" watchObservedRunningTime="2026-01-23 14:22:04.608675622 +0000 UTC m=+2965.631213247" Jan 23 14:22:06 crc kubenswrapper[4771]: I0123 14:22:06.227988 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:22:06 crc kubenswrapper[4771]: E0123 14:22:06.228728 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:22:06 crc kubenswrapper[4771]: I0123 14:22:06.523569 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g6wp2" Jan 23 14:22:06 crc kubenswrapper[4771]: I0123 14:22:06.523653 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g6wp2" Jan 23 14:22:06 crc kubenswrapper[4771]: I0123 14:22:06.578677 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g6wp2" Jan 23 14:22:16 crc kubenswrapper[4771]: I0123 14:22:16.578291 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g6wp2" Jan 23 14:22:16 crc kubenswrapper[4771]: I0123 14:22:16.632088 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wp2"] Jan 23 14:22:16 crc kubenswrapper[4771]: I0123 14:22:16.730784 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g6wp2" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="registry-server" containerID="cri-o://703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f" gracePeriod=2 Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.240875 4771 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.240875 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.411599 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-utilities\") pod \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") "
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.411665 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7h89\" (UniqueName: \"kubernetes.io/projected/bf76a9c4-9afd-499a-92aa-fc633104e4b9-kube-api-access-t7h89\") pod \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") "
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.411931 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-catalog-content\") pod \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\" (UID: \"bf76a9c4-9afd-499a-92aa-fc633104e4b9\") "
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.412641 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-utilities" (OuterVolumeSpecName: "utilities") pod "bf76a9c4-9afd-499a-92aa-fc633104e4b9" (UID: "bf76a9c4-9afd-499a-92aa-fc633104e4b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.418752 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf76a9c4-9afd-499a-92aa-fc633104e4b9-kube-api-access-t7h89" (OuterVolumeSpecName: "kube-api-access-t7h89") pod "bf76a9c4-9afd-499a-92aa-fc633104e4b9" (UID: "bf76a9c4-9afd-499a-92aa-fc633104e4b9"). InnerVolumeSpecName "kube-api-access-t7h89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.437188 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf76a9c4-9afd-499a-92aa-fc633104e4b9" (UID: "bf76a9c4-9afd-499a-92aa-fc633104e4b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.515606 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.515677 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf76a9c4-9afd-499a-92aa-fc633104e4b9-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.515691 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7h89\" (UniqueName: \"kubernetes.io/projected/bf76a9c4-9afd-499a-92aa-fc633104e4b9-kube-api-access-t7h89\") on node \"crc\" DevicePath \"\""
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.742088 4771 generic.go:334] "Generic (PLEG): container finished" podID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerID="703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f" exitCode=0
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.742149 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wp2" event={"ID":"bf76a9c4-9afd-499a-92aa-fc633104e4b9","Type":"ContainerDied","Data":"703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f"}
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.742186 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wp2" event={"ID":"bf76a9c4-9afd-499a-92aa-fc633104e4b9","Type":"ContainerDied","Data":"f4a91de2e783ca10eba3d388e7125024f6ad025c61977a58971ad2e6bc8e7c79"}
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.742214 4771 scope.go:117] "RemoveContainer" containerID="703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.742435 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wp2"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.784653 4771 scope.go:117] "RemoveContainer" containerID="4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.789760 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wp2"]
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.801278 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wp2"]
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.816062 4771 scope.go:117] "RemoveContainer" containerID="bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.885834 4771 scope.go:117] "RemoveContainer" containerID="703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f"
Jan 23 14:22:17 crc kubenswrapper[4771]: E0123 14:22:17.890681 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f\": container with ID starting with 703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f not found: ID does not exist" containerID="703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.890753 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f"} err="failed to get container status \"703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f\": rpc error: code = NotFound desc = could not find container \"703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f\": container with ID starting with 703d83ddd2b5ced75b33bbda7bd7ea87e5c7d87797256605d44944cbe6ef8c7f not found: ID does not exist"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.890787 4771 scope.go:117] "RemoveContainer" containerID="4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de"
Jan 23 14:22:17 crc kubenswrapper[4771]: E0123 14:22:17.891562 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de\": container with ID starting with 4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de not found: ID does not exist" containerID="4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.891642 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de"} err="failed to get container status \"4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de\": rpc error: code = NotFound desc = could not find container \"4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de\": container with ID starting with 4ecdfb90a1caa9564749dd4183fb66efe8ab670c7232b25e2b09419a92e230de not found: ID does not exist"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.891689 4771 scope.go:117] "RemoveContainer" containerID="bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5"
Jan 23 14:22:17 crc kubenswrapper[4771]: E0123 14:22:17.892060 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5\": container with ID starting with bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5 not found: ID does not exist" containerID="bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5"
Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.892089 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5"} err="failed to get container status \"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5\": rpc error: code = NotFound desc = could not find container \"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5\": container with ID starting with bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5 not found: ID does not exist"
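The RemoveContainer / "NotFound" pairs above are benign: container garbage collection and the SyncLoop REMOVE handler race to delete the same containers, and the second status lookup finds them already gone. The usual pattern is to treat NotFound from the runtime as success, as in this sketch (assumed gRPC error shapes, not the kubelet's actual code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a NotFound response as a successful, already
// completed deletion, making the operation idempotent.
func removeContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer("703d83ddd2b5", gone)) // <nil>
}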
failed" err="rpc error: code = NotFound desc = could not find container \"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5\": container with ID starting with bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5 not found: ID does not exist" containerID="bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5" Jan 23 14:22:17 crc kubenswrapper[4771]: I0123 14:22:17.892089 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5"} err="failed to get container status \"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5\": rpc error: code = NotFound desc = could not find container \"bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5\": container with ID starting with bd95b863a61089a6485c4d086377efdef5a48b76e9b638e6dced8f3972200cc5 not found: ID does not exist" Jan 23 14:22:19 crc kubenswrapper[4771]: I0123 14:22:19.249094 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:22:19 crc kubenswrapper[4771]: I0123 14:22:19.249090 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" path="/var/lib/kubelet/pods/bf76a9c4-9afd-499a-92aa-fc633104e4b9/volumes" Jan 23 14:22:19 crc kubenswrapper[4771]: E0123 14:22:19.249556 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:22:30 crc kubenswrapper[4771]: I0123 14:22:30.228332 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:22:30 crc kubenswrapper[4771]: E0123 14:22:30.229280 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:22:43 crc kubenswrapper[4771]: I0123 14:22:43.241108 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:22:43 crc kubenswrapper[4771]: E0123 14:22:43.242267 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:22:58 crc kubenswrapper[4771]: I0123 14:22:58.229173 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:22:58 crc kubenswrapper[4771]: E0123 14:22:58.230358 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:23:09 crc kubenswrapper[4771]: I0123 14:23:09.237538 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:23:09 crc kubenswrapper[4771]: E0123 14:23:09.238903 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:23:22 crc kubenswrapper[4771]: I0123 14:23:22.229140 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:23:22 crc kubenswrapper[4771]: E0123 14:23:22.230387 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:23:36 crc kubenswrapper[4771]: I0123 14:23:36.229154 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:23:36 crc kubenswrapper[4771]: E0123 14:23:36.230122 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:23:49 crc kubenswrapper[4771]: I0123 14:23:49.238303 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:23:49 crc kubenswrapper[4771]: E0123 14:23:49.239600 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:24:00 crc kubenswrapper[4771]: I0123 14:24:00.229110 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:24:00 crc kubenswrapper[4771]: E0123 14:24:00.230167 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:24:11 crc kubenswrapper[4771]: I0123 14:24:11.232798 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:24:11 crc kubenswrapper[4771]: E0123 14:24:11.234001 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:24:23 crc kubenswrapper[4771]: I0123 14:24:23.229273 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:24:23 crc kubenswrapper[4771]: E0123 14:24:23.230837 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:24:38 crc kubenswrapper[4771]: I0123 14:24:38.229446 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:24:38 crc kubenswrapper[4771]: E0123 14:24:38.230977 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:24:51 crc kubenswrapper[4771]: I0123 14:24:51.228967 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:24:51 crc kubenswrapper[4771]: E0123 14:24:51.231487 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:25:06 crc kubenswrapper[4771]: I0123 14:25:06.228617 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:25:06 crc kubenswrapper[4771]: E0123 14:25:06.229602 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:25:21 crc kubenswrapper[4771]: I0123 14:25:21.236873 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:25:21 crc kubenswrapper[4771]: E0123 14:25:21.238472 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:25:32 crc kubenswrapper[4771]: I0123 14:25:32.230230 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:25:32 crc kubenswrapper[4771]: E0123 14:25:32.231122 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:25:46 crc kubenswrapper[4771]: I0123 14:25:46.228584 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:25:46 crc kubenswrapper[4771]: E0123 14:25:46.229557 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:25:58 crc kubenswrapper[4771]: I0123 14:25:58.228328 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:25:58 crc kubenswrapper[4771]: E0123 14:25:58.229483 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:26:10 crc kubenswrapper[4771]: I0123 14:26:10.228120 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725" Jan 23 14:26:11 crc kubenswrapper[4771]: I0123 14:26:11.299989 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"b3743d936d5339e4125e66f8ff38fe8721ffddf59164ea95b5547e2e38a32686"} Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.475681 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-546qf"] Jan 23 14:26:55 crc kubenswrapper[4771]: E0123 14:26:55.477000 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="extract-utilities" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.477021 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="extract-utilities" Jan 23 14:26:55 crc kubenswrapper[4771]: E0123 14:26:55.477049 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="registry-server" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.477057 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="registry-server" Jan 23 14:26:55 crc kubenswrapper[4771]: E0123 14:26:55.477073 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="extract-content" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.477079 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="extract-content" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.477334 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf76a9c4-9afd-499a-92aa-fc633104e4b9" containerName="registry-server" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.479116 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.490718 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-546qf"] Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.540767 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-utilities\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.541092 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqq79\" (UniqueName: \"kubernetes.io/projected/6fc80d27-3af9-481b-a850-946531f63b62-kube-api-access-qqq79\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.541127 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-catalog-content\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.643852 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqq79\" (UniqueName: \"kubernetes.io/projected/6fc80d27-3af9-481b-a850-946531f63b62-kube-api-access-qqq79\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.643921 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-catalog-content\") 
pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.644034 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-utilities\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.644634 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-catalog-content\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.644660 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-utilities\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.677747 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqq79\" (UniqueName: \"kubernetes.io/projected/6fc80d27-3af9-481b-a850-946531f63b62-kube-api-access-qqq79\") pod \"certified-operators-546qf\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") " pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:55 crc kubenswrapper[4771]: I0123 14:26:55.811054 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-546qf" Jan 23 14:26:56 crc kubenswrapper[4771]: I0123 14:26:56.433460 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-546qf"] Jan 23 14:26:56 crc kubenswrapper[4771]: I0123 14:26:56.771496 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-546qf" event={"ID":"6fc80d27-3af9-481b-a850-946531f63b62","Type":"ContainerStarted","Data":"72d497b16b619cfb4460d360cdb97a7d121468417891ba52aba58f27e96761a6"} Jan 23 14:26:57 crc kubenswrapper[4771]: I0123 14:26:57.783744 4771 generic.go:334] "Generic (PLEG): container finished" podID="6fc80d27-3af9-481b-a850-946531f63b62" containerID="e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff" exitCode=0 Jan 23 14:26:57 crc kubenswrapper[4771]: I0123 14:26:57.783833 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-546qf" event={"ID":"6fc80d27-3af9-481b-a850-946531f63b62","Type":"ContainerDied","Data":"e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff"} Jan 23 14:26:57 crc kubenswrapper[4771]: I0123 14:26:57.786485 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:26:58 crc kubenswrapper[4771]: I0123 14:26:58.796978 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-546qf" event={"ID":"6fc80d27-3af9-481b-a850-946531f63b62","Type":"ContainerStarted","Data":"b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5"} Jan 23 14:26:59 crc kubenswrapper[4771]: E0123 14:26:59.978815 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fc80d27_3af9_481b_a850_946531f63b62.slice/crio-b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:27:00 crc kubenswrapper[4771]: I0123 14:27:00.820677 4771 generic.go:334] "Generic (PLEG): container finished" podID="6fc80d27-3af9-481b-a850-946531f63b62" containerID="b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5" exitCode=0 Jan 23 14:27:00 crc kubenswrapper[4771]: I0123 14:27:00.820762 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-546qf" event={"ID":"6fc80d27-3af9-481b-a850-946531f63b62","Type":"ContainerDied","Data":"b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5"} Jan 23 14:27:01 crc kubenswrapper[4771]: I0123 14:27:01.838261 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-546qf" event={"ID":"6fc80d27-3af9-481b-a850-946531f63b62","Type":"ContainerStarted","Data":"46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2"} Jan 23 14:27:01 crc kubenswrapper[4771]: I0123 14:27:01.871824 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-546qf" podStartSLOduration=3.4343998510000002 podStartE2EDuration="6.871796357s" podCreationTimestamp="2026-01-23 14:26:55 +0000 UTC" firstStartedPulling="2026-01-23 14:26:57.786230904 +0000 UTC m=+3258.808768519" lastFinishedPulling="2026-01-23 14:27:01.2236274 +0000 UTC m=+3262.246165025" observedRunningTime="2026-01-23 14:27:01.861350521 +0000 UTC m=+3262.883888146" watchObservedRunningTime="2026-01-23 
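The ContainerStarted/ContainerDied lines above come from the pod lifecycle event generator (PLEG): it relists container state from the runtime and turns state transitions into SyncLoop events, which is why a container that exits between relists shows up as "container finished" together with its exit code. A toy relist-and-diff, not the real data structures:

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// diff converts container state transitions into PLEG-style events.
// Map iteration order is random, so event order may vary.
func diff(prev, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		switch {
		case prev[id] != running && s == running:
			events = append(events, "ContainerStarted "+id)
		case prev[id] == running && s == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	prev := map[string]state{"e9a31846": running}
	cur := map[string]state{"e9a31846": exited, "b9b6261a": running}
	fmt.Println(diff(prev, cur))
}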
Jan 23 14:27:05 crc kubenswrapper[4771]: I0123 14:27:05.811371 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-546qf"
Jan 23 14:27:05 crc kubenswrapper[4771]: I0123 14:27:05.812050 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-546qf"
Jan 23 14:27:05 crc kubenswrapper[4771]: I0123 14:27:05.860510 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-546qf"
Jan 23 14:27:15 crc kubenswrapper[4771]: I0123 14:27:15.873280 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-546qf"
Jan 23 14:27:15 crc kubenswrapper[4771]: I0123 14:27:15.933068 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-546qf"]
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.000260 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-546qf" podUID="6fc80d27-3af9-481b-a850-946531f63b62" containerName="registry-server" containerID="cri-o://46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2" gracePeriod=2
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.578318 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-546qf"
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.586812 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqq79\" (UniqueName: \"kubernetes.io/projected/6fc80d27-3af9-481b-a850-946531f63b62-kube-api-access-qqq79\") pod \"6fc80d27-3af9-481b-a850-946531f63b62\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") "
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.586961 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-utilities\") pod \"6fc80d27-3af9-481b-a850-946531f63b62\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") "
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.587136 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-catalog-content\") pod \"6fc80d27-3af9-481b-a850-946531f63b62\" (UID: \"6fc80d27-3af9-481b-a850-946531f63b62\") "
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.589437 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-utilities" (OuterVolumeSpecName: "utilities") pod "6fc80d27-3af9-481b-a850-946531f63b62" (UID: "6fc80d27-3af9-481b-a850-946531f63b62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.624529 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc80d27-3af9-481b-a850-946531f63b62-kube-api-access-qqq79" (OuterVolumeSpecName: "kube-api-access-qqq79") pod "6fc80d27-3af9-481b-a850-946531f63b62" (UID: "6fc80d27-3af9-481b-a850-946531f63b62"). InnerVolumeSpecName "kube-api-access-qqq79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.672764 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fc80d27-3af9-481b-a850-946531f63b62" (UID: "6fc80d27-3af9-481b-a850-946531f63b62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.691189 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.691230 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fc80d27-3af9-481b-a850-946531f63b62-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:16 crc kubenswrapper[4771]: I0123 14:27:16.691245 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqq79\" (UniqueName: \"kubernetes.io/projected/6fc80d27-3af9-481b-a850-946531f63b62-kube-api-access-qqq79\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.042031 4771 generic.go:334] "Generic (PLEG): container finished" podID="6fc80d27-3af9-481b-a850-946531f63b62" containerID="46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2" exitCode=0
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.042583 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-546qf" event={"ID":"6fc80d27-3af9-481b-a850-946531f63b62","Type":"ContainerDied","Data":"46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2"}
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.042643 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-546qf" event={"ID":"6fc80d27-3af9-481b-a850-946531f63b62","Type":"ContainerDied","Data":"72d497b16b619cfb4460d360cdb97a7d121468417891ba52aba58f27e96761a6"}
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.042668 4771 scope.go:117] "RemoveContainer" containerID="46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.042979 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-546qf"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.119643 4771 scope.go:117] "RemoveContainer" containerID="b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.125507 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-546qf"]
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.169892 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-546qf"]
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.239585 4771 scope.go:117] "RemoveContainer" containerID="e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.257387 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc80d27-3af9-481b-a850-946531f63b62" path="/var/lib/kubelet/pods/6fc80d27-3af9-481b-a850-946531f63b62/volumes"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.294919 4771 scope.go:117] "RemoveContainer" containerID="46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2"
Jan 23 14:27:17 crc kubenswrapper[4771]: E0123 14:27:17.295569 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2\": container with ID starting with 46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2 not found: ID does not exist" containerID="46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.295626 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2"} err="failed to get container status \"46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2\": rpc error: code = NotFound desc = could not find container \"46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2\": container with ID starting with 46c8f7943a97401c639e5e592bb688934f2ebac20509ff658bf6721003010bc2 not found: ID does not exist"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.295660 4771 scope.go:117] "RemoveContainer" containerID="b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5"
Jan 23 14:27:17 crc kubenswrapper[4771]: E0123 14:27:17.296166 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5\": container with ID starting with b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5 not found: ID does not exist" containerID="b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.296247 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5"} err="failed to get container status \"b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5\": rpc error: code = NotFound desc = could not find container \"b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5\": container with ID starting with b9b6261a94ef1d4c3f56dd6ddfd8bd2fcd201e9ee8b49f7fde683a1b30a0ecd5 not found: ID does not exist"
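The probe lines at 14:27:05 through 14:27:15 above show the startup probe gating the readiness probe: readiness reports an empty status while startup is still "unhealthy", and only flips to "ready" after startup reports "started". A simplified model of that bookkeeping (thresholds assumed, the pod spec is not in the log):

package main

import "fmt"

type probeState struct {
	started bool
	ready   bool
}

// observe applies one round of probe results; readiness is only
// evaluated once the startup probe has succeeded.
func (p *probeState) observe(startupOK, readinessOK bool) {
	if !p.started {
		p.started = startupOK // successThreshold assumed to be 1
		return                // readiness stays "" while startup is pending
	}
	p.ready = readinessOK
}

func main() {
	var p probeState
	p.observe(false, false) // startup status="unhealthy"
	p.observe(true, false)  // startup status="started"
	p.observe(true, true)   // readiness status="ready"
	fmt.Printf("started=%v ready=%v\n", p.started, p.ready)
}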
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.296305 4771 scope.go:117] "RemoveContainer" containerID="e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff"
Jan 23 14:27:17 crc kubenswrapper[4771]: E0123 14:27:17.296925 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff\": container with ID starting with e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff not found: ID does not exist" containerID="e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff"
Jan 23 14:27:17 crc kubenswrapper[4771]: I0123 14:27:17.296967 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff"} err="failed to get container status \"e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff\": rpc error: code = NotFound desc = could not find container \"e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff\": container with ID starting with e9a318461a6a3e755c9aec5310c3ea1260637d59ec51c06a1bdab531d6c97aff not found: ID does not exist"
Jan 23 14:28:30 crc kubenswrapper[4771]: I0123 14:28:30.312571 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:28:30 crc kubenswrapper[4771]: I0123 14:28:30.313465 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:29:00 crc kubenswrapper[4771]: I0123 14:29:00.312546 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:29:00 crc kubenswrapper[4771]: I0123 14:29:00.313399 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.312274 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.313092 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.313158 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.313827 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b3743d936d5339e4125e66f8ff38fe8721ffddf59164ea95b5547e2e38a32686"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.313980 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://b3743d936d5339e4125e66f8ff38fe8721ffddf59164ea95b5547e2e38a32686" gracePeriod=600
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.478600 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"b3743d936d5339e4125e66f8ff38fe8721ffddf59164ea95b5547e2e38a32686"}
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.478676 4771 scope.go:117] "RemoveContainer" containerID="0169782d8b568197844f70828fc6ac22653603169e7aa5c3618c71485b0e6725"
Jan 23 14:29:30 crc kubenswrapper[4771]: I0123 14:29:30.478552 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="b3743d936d5339e4125e66f8ff38fe8721ffddf59164ea95b5547e2e38a32686" exitCode=0
Jan 23 14:29:31 crc kubenswrapper[4771]: I0123 14:29:31.495072 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2"}
podUID="6fc80d27-3af9-481b-a850-946531f63b62" containerName="registry-server" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.157445 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.159894 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.165753 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.168873 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5"] Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.222678 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/250f782e-f7d2-4bd3-9359-fe9e97d868cc-secret-volume\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.223031 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/250f782e-f7d2-4bd3-9359-fe9e97d868cc-config-volume\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.223121 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwvgc\" (UniqueName: \"kubernetes.io/projected/250f782e-f7d2-4bd3-9359-fe9e97d868cc-kube-api-access-vwvgc\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.325800 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/250f782e-f7d2-4bd3-9359-fe9e97d868cc-config-volume\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.326168 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwvgc\" (UniqueName: \"kubernetes.io/projected/250f782e-f7d2-4bd3-9359-fe9e97d868cc-kube-api-access-vwvgc\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.326382 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/250f782e-f7d2-4bd3-9359-fe9e97d868cc-secret-volume\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.327110 
4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/250f782e-f7d2-4bd3-9359-fe9e97d868cc-config-volume\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.336003 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/250f782e-f7d2-4bd3-9359-fe9e97d868cc-secret-volume\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.354725 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwvgc\" (UniqueName: \"kubernetes.io/projected/250f782e-f7d2-4bd3-9359-fe9e97d868cc-kube-api-access-vwvgc\") pod \"collect-profiles-29486310-g5kt5\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:00 crc kubenswrapper[4771]: I0123 14:30:00.494471 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:01 crc kubenswrapper[4771]: I0123 14:30:01.026868 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5"] Jan 23 14:30:01 crc kubenswrapper[4771]: I0123 14:30:01.850182 4771 generic.go:334] "Generic (PLEG): container finished" podID="250f782e-f7d2-4bd3-9359-fe9e97d868cc" containerID="da1d11faf4a6ff6d35b8206148ec4b43acfc4c00d4e2e445ec8099bacc0365b1" exitCode=0 Jan 23 14:30:01 crc kubenswrapper[4771]: I0123 14:30:01.850257 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" event={"ID":"250f782e-f7d2-4bd3-9359-fe9e97d868cc","Type":"ContainerDied","Data":"da1d11faf4a6ff6d35b8206148ec4b43acfc4c00d4e2e445ec8099bacc0365b1"} Jan 23 14:30:01 crc kubenswrapper[4771]: I0123 14:30:01.850298 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" event={"ID":"250f782e-f7d2-4bd3-9359-fe9e97d868cc","Type":"ContainerStarted","Data":"52db95d0ac25e8556498d1b28ccaf9c67cbae755ea4e5398302b7ce53f338ada"} Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.264716 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.427628 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/250f782e-f7d2-4bd3-9359-fe9e97d868cc-config-volume\") pod \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.427715 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/250f782e-f7d2-4bd3-9359-fe9e97d868cc-secret-volume\") pod \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.427815 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwvgc\" (UniqueName: \"kubernetes.io/projected/250f782e-f7d2-4bd3-9359-fe9e97d868cc-kube-api-access-vwvgc\") pod \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\" (UID: \"250f782e-f7d2-4bd3-9359-fe9e97d868cc\") " Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.432338 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250f782e-f7d2-4bd3-9359-fe9e97d868cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "250f782e-f7d2-4bd3-9359-fe9e97d868cc" (UID: "250f782e-f7d2-4bd3-9359-fe9e97d868cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.465658 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250f782e-f7d2-4bd3-9359-fe9e97d868cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "250f782e-f7d2-4bd3-9359-fe9e97d868cc" (UID: "250f782e-f7d2-4bd3-9359-fe9e97d868cc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.468684 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250f782e-f7d2-4bd3-9359-fe9e97d868cc-kube-api-access-vwvgc" (OuterVolumeSpecName: "kube-api-access-vwvgc") pod "250f782e-f7d2-4bd3-9359-fe9e97d868cc" (UID: "250f782e-f7d2-4bd3-9359-fe9e97d868cc"). InnerVolumeSpecName "kube-api-access-vwvgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.531168 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwvgc\" (UniqueName: \"kubernetes.io/projected/250f782e-f7d2-4bd3-9359-fe9e97d868cc-kube-api-access-vwvgc\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.531459 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/250f782e-f7d2-4bd3-9359-fe9e97d868cc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.531518 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/250f782e-f7d2-4bd3-9359-fe9e97d868cc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.873564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" event={"ID":"250f782e-f7d2-4bd3-9359-fe9e97d868cc","Type":"ContainerDied","Data":"52db95d0ac25e8556498d1b28ccaf9c67cbae755ea4e5398302b7ce53f338ada"} Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.873613 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52db95d0ac25e8556498d1b28ccaf9c67cbae755ea4e5398302b7ce53f338ada" Jan 23 14:30:03 crc kubenswrapper[4771]: I0123 14:30:03.873633 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5" Jan 23 14:30:04 crc kubenswrapper[4771]: I0123 14:30:04.357986 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"] Jan 23 14:30:04 crc kubenswrapper[4771]: I0123 14:30:04.369740 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486265-4qtnw"] Jan 23 14:30:05 crc kubenswrapper[4771]: I0123 14:30:05.245602 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b22cafb9-93b9-4c74-878a-3b61fc86aa40" path="/var/lib/kubelet/pods/b22cafb9-93b9-4c74-878a-3b61fc86aa40/volumes" Jan 23 14:31:00 crc kubenswrapper[4771]: I0123 14:31:00.212091 4771 scope.go:117] "RemoveContainer" containerID="2cd19e2a64b39e22b1ae456f4310d91119158d955dd61e86d3e5582556b5e080" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.362859 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pjwdd"] Jan 23 14:31:36 crc kubenswrapper[4771]: E0123 14:31:36.364160 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250f782e-f7d2-4bd3-9359-fe9e97d868cc" containerName="collect-profiles" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.364179 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="250f782e-f7d2-4bd3-9359-fe9e97d868cc" containerName="collect-profiles" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.364561 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="250f782e-f7d2-4bd3-9359-fe9e97d868cc" containerName="collect-profiles" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.366247 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.372873 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjwdd"] Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.441596 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brhhx\" (UniqueName: \"kubernetes.io/projected/42095aba-3e4c-4f58-a879-9a90af0d1d98-kube-api-access-brhhx\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.441773 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-utilities\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.441827 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-catalog-content\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.545028 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brhhx\" (UniqueName: \"kubernetes.io/projected/42095aba-3e4c-4f58-a879-9a90af0d1d98-kube-api-access-brhhx\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.545254 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-utilities\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.545288 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-catalog-content\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.546018 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-catalog-content\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.546277 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-utilities\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.571318 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-brhhx\" (UniqueName: \"kubernetes.io/projected/42095aba-3e4c-4f58-a879-9a90af0d1d98-kube-api-access-brhhx\") pod \"community-operators-pjwdd\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:36 crc kubenswrapper[4771]: I0123 14:31:36.706312 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:37 crc kubenswrapper[4771]: I0123 14:31:37.362393 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjwdd"] Jan 23 14:31:37 crc kubenswrapper[4771]: E0123 14:31:37.860827 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42095aba_3e4c_4f58_a879_9a90af0d1d98.slice/crio-d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42095aba_3e4c_4f58_a879_9a90af0d1d98.slice/crio-conmon-d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:31:37 crc kubenswrapper[4771]: I0123 14:31:37.907200 4771 generic.go:334] "Generic (PLEG): container finished" podID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerID="d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81" exitCode=0 Jan 23 14:31:37 crc kubenswrapper[4771]: I0123 14:31:37.907254 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjwdd" event={"ID":"42095aba-3e4c-4f58-a879-9a90af0d1d98","Type":"ContainerDied","Data":"d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81"} Jan 23 14:31:37 crc kubenswrapper[4771]: I0123 14:31:37.907284 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjwdd" event={"ID":"42095aba-3e4c-4f58-a879-9a90af0d1d98","Type":"ContainerStarted","Data":"58666ee02b8686c64fdc0aaeafa386a805e2d76188c1cd980af60844b2826122"} Jan 23 14:31:39 crc kubenswrapper[4771]: I0123 14:31:39.936998 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjwdd" event={"ID":"42095aba-3e4c-4f58-a879-9a90af0d1d98","Type":"ContainerStarted","Data":"1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d"} Jan 23 14:31:41 crc kubenswrapper[4771]: I0123 14:31:41.960980 4771 generic.go:334] "Generic (PLEG): container finished" podID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerID="1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d" exitCode=0 Jan 23 14:31:41 crc kubenswrapper[4771]: I0123 14:31:41.961040 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjwdd" event={"ID":"42095aba-3e4c-4f58-a879-9a90af0d1d98","Type":"ContainerDied","Data":"1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d"} Jan 23 14:31:43 crc kubenswrapper[4771]: I0123 14:31:43.985871 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjwdd" event={"ID":"42095aba-3e4c-4f58-a879-9a90af0d1d98","Type":"ContainerStarted","Data":"147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955"} Jan 23 14:31:44 crc kubenswrapper[4771]: I0123 14:31:44.006300 4771 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pjwdd" podStartSLOduration=3.093928831 podStartE2EDuration="8.006277722s" podCreationTimestamp="2026-01-23 14:31:36 +0000 UTC" firstStartedPulling="2026-01-23 14:31:37.910486487 +0000 UTC m=+3538.933024112" lastFinishedPulling="2026-01-23 14:31:42.822835358 +0000 UTC m=+3543.845373003" observedRunningTime="2026-01-23 14:31:44.003372861 +0000 UTC m=+3545.025910486" watchObservedRunningTime="2026-01-23 14:31:44.006277722 +0000 UTC m=+3545.028815347" Jan 23 14:31:46 crc kubenswrapper[4771]: I0123 14:31:46.706845 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:46 crc kubenswrapper[4771]: I0123 14:31:46.707522 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:46 crc kubenswrapper[4771]: I0123 14:31:46.757020 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:56 crc kubenswrapper[4771]: I0123 14:31:56.759449 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:56 crc kubenswrapper[4771]: I0123 14:31:56.821504 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjwdd"] Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.134327 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pjwdd" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="registry-server" containerID="cri-o://147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955" gracePeriod=2 Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.630394 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.801092 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-utilities\") pod \"42095aba-3e4c-4f58-a879-9a90af0d1d98\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.801175 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-catalog-content\") pod \"42095aba-3e4c-4f58-a879-9a90af0d1d98\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.801337 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brhhx\" (UniqueName: \"kubernetes.io/projected/42095aba-3e4c-4f58-a879-9a90af0d1d98-kube-api-access-brhhx\") pod \"42095aba-3e4c-4f58-a879-9a90af0d1d98\" (UID: \"42095aba-3e4c-4f58-a879-9a90af0d1d98\") " Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.802523 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-utilities" (OuterVolumeSpecName: "utilities") pod "42095aba-3e4c-4f58-a879-9a90af0d1d98" (UID: "42095aba-3e4c-4f58-a879-9a90af0d1d98"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.812451 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42095aba-3e4c-4f58-a879-9a90af0d1d98-kube-api-access-brhhx" (OuterVolumeSpecName: "kube-api-access-brhhx") pod "42095aba-3e4c-4f58-a879-9a90af0d1d98" (UID: "42095aba-3e4c-4f58-a879-9a90af0d1d98"). InnerVolumeSpecName "kube-api-access-brhhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.871380 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42095aba-3e4c-4f58-a879-9a90af0d1d98" (UID: "42095aba-3e4c-4f58-a879-9a90af0d1d98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.905215 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.905275 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42095aba-3e4c-4f58-a879-9a90af0d1d98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:57 crc kubenswrapper[4771]: I0123 14:31:57.905299 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brhhx\" (UniqueName: \"kubernetes.io/projected/42095aba-3e4c-4f58-a879-9a90af0d1d98-kube-api-access-brhhx\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.147983 4771 generic.go:334] "Generic (PLEG): container finished" podID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerID="147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955" exitCode=0 Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.148039 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjwdd" event={"ID":"42095aba-3e4c-4f58-a879-9a90af0d1d98","Type":"ContainerDied","Data":"147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955"} Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.148051 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjwdd" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.148078 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjwdd" event={"ID":"42095aba-3e4c-4f58-a879-9a90af0d1d98","Type":"ContainerDied","Data":"58666ee02b8686c64fdc0aaeafa386a805e2d76188c1cd980af60844b2826122"} Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.148107 4771 scope.go:117] "RemoveContainer" containerID="147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.170876 4771 scope.go:117] "RemoveContainer" containerID="1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.201384 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjwdd"] Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.206330 4771 scope.go:117] "RemoveContainer" containerID="d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.226544 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pjwdd"] Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.265686 4771 scope.go:117] "RemoveContainer" containerID="147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955" Jan 23 14:31:58 crc kubenswrapper[4771]: E0123 14:31:58.266292 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955\": container with ID starting with 147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955 not found: ID does not exist" containerID="147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.266474 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955"} err="failed to get container status \"147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955\": rpc error: code = NotFound desc = could not find container \"147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955\": container with ID starting with 147ce503373dfa584abbe9897952e4d8ab5afe1f40a36dce61e6817186c3f955 not found: ID does not exist" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.266619 4771 scope.go:117] "RemoveContainer" containerID="1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d" Jan 23 14:31:58 crc kubenswrapper[4771]: E0123 14:31:58.267248 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d\": container with ID starting with 1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d not found: ID does not exist" containerID="1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.267281 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d"} err="failed to get container status \"1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d\": rpc error: code = NotFound desc = could not find 
container \"1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d\": container with ID starting with 1865ab142bfe0826358f349d992651361cd762c9efcb474112b41ffd236d992d not found: ID does not exist" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.267298 4771 scope.go:117] "RemoveContainer" containerID="d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81" Jan 23 14:31:58 crc kubenswrapper[4771]: E0123 14:31:58.267705 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81\": container with ID starting with d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81 not found: ID does not exist" containerID="d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81" Jan 23 14:31:58 crc kubenswrapper[4771]: I0123 14:31:58.267737 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81"} err="failed to get container status \"d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81\": rpc error: code = NotFound desc = could not find container \"d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81\": container with ID starting with d369e94598721135c1984ea9eea306f09928e01084f13e647f6c2a4f6067ed81 not found: ID does not exist" Jan 23 14:31:59 crc kubenswrapper[4771]: I0123 14:31:59.247330 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" path="/var/lib/kubelet/pods/42095aba-3e4c-4f58-a879-9a90af0d1d98/volumes" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.312443 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.312535 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.818935 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4769t"] Jan 23 14:32:00 crc kubenswrapper[4771]: E0123 14:32:00.819633 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="registry-server" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.819658 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="registry-server" Jan 23 14:32:00 crc kubenswrapper[4771]: E0123 14:32:00.819672 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="extract-content" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.819684 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="extract-content" Jan 23 14:32:00 crc kubenswrapper[4771]: E0123 14:32:00.819728 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="extract-utilities" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.819736 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="extract-utilities" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.820033 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="42095aba-3e4c-4f58-a879-9a90af0d1d98" containerName="registry-server" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.822064 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.835723 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4769t"] Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.991785 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-utilities\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.992362 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-catalog-content\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:00 crc kubenswrapper[4771]: I0123 14:32:00.992455 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgttb\" (UniqueName: \"kubernetes.io/projected/e6952158-945c-4d27-9cf4-a4c0d28b4127-kube-api-access-lgttb\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.095734 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-catalog-content\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.095945 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgttb\" (UniqueName: \"kubernetes.io/projected/e6952158-945c-4d27-9cf4-a4c0d28b4127-kube-api-access-lgttb\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.096146 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-utilities\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.096697 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-catalog-content\") pod \"redhat-operators-4769t\" (UID: 
\"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.096730 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-utilities\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.123719 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgttb\" (UniqueName: \"kubernetes.io/projected/e6952158-945c-4d27-9cf4-a4c0d28b4127-kube-api-access-lgttb\") pod \"redhat-operators-4769t\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.146148 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:01 crc kubenswrapper[4771]: I0123 14:32:01.767006 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4769t"] Jan 23 14:32:01 crc kubenswrapper[4771]: W0123 14:32:01.772273 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6952158_945c_4d27_9cf4_a4c0d28b4127.slice/crio-afa551be8774dbaa9811e74ed7361545c06894cde41dd65ec5f55013c6fff44d WatchSource:0}: Error finding container afa551be8774dbaa9811e74ed7361545c06894cde41dd65ec5f55013c6fff44d: Status 404 returned error can't find the container with id afa551be8774dbaa9811e74ed7361545c06894cde41dd65ec5f55013c6fff44d Jan 23 14:32:02 crc kubenswrapper[4771]: I0123 14:32:02.194198 4771 generic.go:334] "Generic (PLEG): container finished" podID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerID="67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d" exitCode=0 Jan 23 14:32:02 crc kubenswrapper[4771]: I0123 14:32:02.194253 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4769t" event={"ID":"e6952158-945c-4d27-9cf4-a4c0d28b4127","Type":"ContainerDied","Data":"67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d"} Jan 23 14:32:02 crc kubenswrapper[4771]: I0123 14:32:02.194320 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4769t" event={"ID":"e6952158-945c-4d27-9cf4-a4c0d28b4127","Type":"ContainerStarted","Data":"afa551be8774dbaa9811e74ed7361545c06894cde41dd65ec5f55013c6fff44d"} Jan 23 14:32:02 crc kubenswrapper[4771]: I0123 14:32:02.197281 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.209640 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4769t" event={"ID":"e6952158-945c-4d27-9cf4-a4c0d28b4127","Type":"ContainerStarted","Data":"1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04"} Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.222066 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-59cj9"] Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.225108 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.243512 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59cj9"] Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.360582 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-catalog-content\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.360648 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x492h\" (UniqueName: \"kubernetes.io/projected/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-kube-api-access-x492h\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.360773 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-utilities\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.463209 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-catalog-content\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.463777 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x492h\" (UniqueName: \"kubernetes.io/projected/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-kube-api-access-x492h\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.463855 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-utilities\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.463982 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-catalog-content\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.464471 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-utilities\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.489294 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x492h\" (UniqueName: \"kubernetes.io/projected/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-kube-api-access-x492h\") pod \"redhat-marketplace-59cj9\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:03 crc kubenswrapper[4771]: I0123 14:32:03.545788 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:04 crc kubenswrapper[4771]: I0123 14:32:04.158879 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-59cj9"] Jan 23 14:32:04 crc kubenswrapper[4771]: W0123 14:32:04.191819 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f48e04c_f8e3_4bc9_93d6_28ffd59d9528.slice/crio-154ac220dd6fa0eb2c695bb9f7c2db01ba1f244e04ee9ceae2c1d482303ab54d WatchSource:0}: Error finding container 154ac220dd6fa0eb2c695bb9f7c2db01ba1f244e04ee9ceae2c1d482303ab54d: Status 404 returned error can't find the container with id 154ac220dd6fa0eb2c695bb9f7c2db01ba1f244e04ee9ceae2c1d482303ab54d Jan 23 14:32:04 crc kubenswrapper[4771]: I0123 14:32:04.222608 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59cj9" event={"ID":"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528","Type":"ContainerStarted","Data":"154ac220dd6fa0eb2c695bb9f7c2db01ba1f244e04ee9ceae2c1d482303ab54d"} Jan 23 14:32:06 crc kubenswrapper[4771]: I0123 14:32:06.250984 4771 generic.go:334] "Generic (PLEG): container finished" podID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerID="cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3" exitCode=0 Jan 23 14:32:06 crc kubenswrapper[4771]: I0123 14:32:06.251090 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59cj9" event={"ID":"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528","Type":"ContainerDied","Data":"cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3"} Jan 23 14:32:10 crc kubenswrapper[4771]: I0123 14:32:10.299322 4771 generic.go:334] "Generic (PLEG): container finished" podID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerID="1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04" exitCode=0 Jan 23 14:32:10 crc kubenswrapper[4771]: I0123 14:32:10.299364 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4769t" event={"ID":"e6952158-945c-4d27-9cf4-a4c0d28b4127","Type":"ContainerDied","Data":"1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04"} Jan 23 14:32:10 crc kubenswrapper[4771]: I0123 14:32:10.306035 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59cj9" event={"ID":"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528","Type":"ContainerStarted","Data":"ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b"} Jan 23 14:32:11 crc kubenswrapper[4771]: I0123 14:32:11.318862 4771 generic.go:334] "Generic (PLEG): container finished" podID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerID="ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b" exitCode=0 Jan 23 14:32:11 crc kubenswrapper[4771]: I0123 14:32:11.318910 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59cj9" 
event={"ID":"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528","Type":"ContainerDied","Data":"ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b"} Jan 23 14:32:11 crc kubenswrapper[4771]: I0123 14:32:11.322626 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4769t" event={"ID":"e6952158-945c-4d27-9cf4-a4c0d28b4127","Type":"ContainerStarted","Data":"4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41"} Jan 23 14:32:11 crc kubenswrapper[4771]: I0123 14:32:11.379888 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4769t" podStartSLOduration=2.859874306 podStartE2EDuration="11.379861365s" podCreationTimestamp="2026-01-23 14:32:00 +0000 UTC" firstStartedPulling="2026-01-23 14:32:02.197043592 +0000 UTC m=+3563.219581217" lastFinishedPulling="2026-01-23 14:32:10.717030651 +0000 UTC m=+3571.739568276" observedRunningTime="2026-01-23 14:32:11.36785023 +0000 UTC m=+3572.390387855" watchObservedRunningTime="2026-01-23 14:32:11.379861365 +0000 UTC m=+3572.402398990" Jan 23 14:32:12 crc kubenswrapper[4771]: I0123 14:32:12.338846 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59cj9" event={"ID":"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528","Type":"ContainerStarted","Data":"5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60"} Jan 23 14:32:12 crc kubenswrapper[4771]: I0123 14:32:12.367370 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-59cj9" podStartSLOduration=3.9084962880000003 podStartE2EDuration="9.367349666s" podCreationTimestamp="2026-01-23 14:32:03 +0000 UTC" firstStartedPulling="2026-01-23 14:32:06.253956892 +0000 UTC m=+3567.276494517" lastFinishedPulling="2026-01-23 14:32:11.71281028 +0000 UTC m=+3572.735347895" observedRunningTime="2026-01-23 14:32:12.357830249 +0000 UTC m=+3573.380367884" watchObservedRunningTime="2026-01-23 14:32:12.367349666 +0000 UTC m=+3573.389887291" Jan 23 14:32:13 crc kubenswrapper[4771]: I0123 14:32:13.546651 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:13 crc kubenswrapper[4771]: I0123 14:32:13.547520 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:14 crc kubenswrapper[4771]: I0123 14:32:14.598703 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-59cj9" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="registry-server" probeResult="failure" output=< Jan 23 14:32:14 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 14:32:14 crc kubenswrapper[4771]: > Jan 23 14:32:21 crc kubenswrapper[4771]: I0123 14:32:21.146696 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:21 crc kubenswrapper[4771]: I0123 14:32:21.147611 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:21 crc kubenswrapper[4771]: I0123 14:32:21.215842 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:21 crc kubenswrapper[4771]: I0123 14:32:21.520817 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:21 crc kubenswrapper[4771]: I0123 14:32:21.581677 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4769t"] Jan 23 14:32:23 crc kubenswrapper[4771]: I0123 14:32:23.479826 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4769t" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="registry-server" containerID="cri-o://4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41" gracePeriod=2 Jan 23 14:32:23 crc kubenswrapper[4771]: I0123 14:32:23.616735 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:23 crc kubenswrapper[4771]: I0123 14:32:23.694384 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:23 crc kubenswrapper[4771]: I0123 14:32:23.996480 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.040052 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgttb\" (UniqueName: \"kubernetes.io/projected/e6952158-945c-4d27-9cf4-a4c0d28b4127-kube-api-access-lgttb\") pod \"e6952158-945c-4d27-9cf4-a4c0d28b4127\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.040358 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-utilities\") pod \"e6952158-945c-4d27-9cf4-a4c0d28b4127\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.040627 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-catalog-content\") pod \"e6952158-945c-4d27-9cf4-a4c0d28b4127\" (UID: \"e6952158-945c-4d27-9cf4-a4c0d28b4127\") " Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.041481 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-utilities" (OuterVolumeSpecName: "utilities") pod "e6952158-945c-4d27-9cf4-a4c0d28b4127" (UID: "e6952158-945c-4d27-9cf4-a4c0d28b4127"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.043345 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.057772 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6952158-945c-4d27-9cf4-a4c0d28b4127-kube-api-access-lgttb" (OuterVolumeSpecName: "kube-api-access-lgttb") pod "e6952158-945c-4d27-9cf4-a4c0d28b4127" (UID: "e6952158-945c-4d27-9cf4-a4c0d28b4127"). InnerVolumeSpecName "kube-api-access-lgttb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.145159 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgttb\" (UniqueName: \"kubernetes.io/projected/e6952158-945c-4d27-9cf4-a4c0d28b4127-kube-api-access-lgttb\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.186741 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6952158-945c-4d27-9cf4-a4c0d28b4127" (UID: "e6952158-945c-4d27-9cf4-a4c0d28b4127"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.248013 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6952158-945c-4d27-9cf4-a4c0d28b4127-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.494458 4771 generic.go:334] "Generic (PLEG): container finished" podID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerID="4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41" exitCode=0 Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.494528 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4769t" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.494616 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4769t" event={"ID":"e6952158-945c-4d27-9cf4-a4c0d28b4127","Type":"ContainerDied","Data":"4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41"} Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.495757 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4769t" event={"ID":"e6952158-945c-4d27-9cf4-a4c0d28b4127","Type":"ContainerDied","Data":"afa551be8774dbaa9811e74ed7361545c06894cde41dd65ec5f55013c6fff44d"} Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.495803 4771 scope.go:117] "RemoveContainer" containerID="4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.532936 4771 scope.go:117] "RemoveContainer" containerID="1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.543156 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4769t"] Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.560557 4771 scope.go:117] "RemoveContainer" containerID="67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.560756 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4769t"] Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.661871 4771 scope.go:117] "RemoveContainer" containerID="4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41" Jan 23 14:32:24 crc kubenswrapper[4771]: E0123 14:32:24.662483 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41\": container with ID starting with 4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41 
not found: ID does not exist" containerID="4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.662520 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41"} err="failed to get container status \"4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41\": rpc error: code = NotFound desc = could not find container \"4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41\": container with ID starting with 4d8a35ccf6960f0e7d9a1ec5a3a0d4f049de2263f32b3f03d1edf1679ffd4d41 not found: ID does not exist" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.662543 4771 scope.go:117] "RemoveContainer" containerID="1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04" Jan 23 14:32:24 crc kubenswrapper[4771]: E0123 14:32:24.662784 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04\": container with ID starting with 1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04 not found: ID does not exist" containerID="1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.662808 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04"} err="failed to get container status \"1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04\": rpc error: code = NotFound desc = could not find container \"1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04\": container with ID starting with 1998eb1ec4b240602dee5504e634ff38ccd994fc41c30d51cbb32aa8347a6e04 not found: ID does not exist" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.662821 4771 scope.go:117] "RemoveContainer" containerID="67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d" Jan 23 14:32:24 crc kubenswrapper[4771]: E0123 14:32:24.664006 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d\": container with ID starting with 67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d not found: ID does not exist" containerID="67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d" Jan 23 14:32:24 crc kubenswrapper[4771]: I0123 14:32:24.664036 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d"} err="failed to get container status \"67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d\": rpc error: code = NotFound desc = could not find container \"67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d\": container with ID starting with 67a9a1a30083b0d80ea5c59a36555597bfcee9068822c65e56316d2a7259e86d not found: ID does not exist" Jan 23 14:32:25 crc kubenswrapper[4771]: I0123 14:32:25.246032 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" path="/var/lib/kubelet/pods/e6952158-945c-4d27-9cf4-a4c0d28b4127/volumes" Jan 23 14:32:25 crc kubenswrapper[4771]: I0123 14:32:25.669610 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-59cj9"] Jan 23 14:32:25 crc kubenswrapper[4771]: I0123 14:32:25.669920 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-59cj9" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="registry-server" containerID="cri-o://5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60" gracePeriod=2 Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.230051 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.302616 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-utilities\") pod \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.302794 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-catalog-content\") pod \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.302903 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x492h\" (UniqueName: \"kubernetes.io/projected/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-kube-api-access-x492h\") pod \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\" (UID: \"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528\") " Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.304064 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-utilities" (OuterVolumeSpecName: "utilities") pod "8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" (UID: "8f48e04c-f8e3-4bc9-93d6-28ffd59d9528"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.314454 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-kube-api-access-x492h" (OuterVolumeSpecName: "kube-api-access-x492h") pod "8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" (UID: "8f48e04c-f8e3-4bc9-93d6-28ffd59d9528"). InnerVolumeSpecName "kube-api-access-x492h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.342646 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" (UID: "8f48e04c-f8e3-4bc9-93d6-28ffd59d9528"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.407425 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.407750 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.407883 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x492h\" (UniqueName: \"kubernetes.io/projected/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528-kube-api-access-x492h\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.523638 4771 generic.go:334] "Generic (PLEG): container finished" podID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerID="5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60" exitCode=0 Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.523712 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59cj9" event={"ID":"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528","Type":"ContainerDied","Data":"5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60"} Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.523766 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-59cj9" event={"ID":"8f48e04c-f8e3-4bc9-93d6-28ffd59d9528","Type":"ContainerDied","Data":"154ac220dd6fa0eb2c695bb9f7c2db01ba1f244e04ee9ceae2c1d482303ab54d"} Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.523788 4771 scope.go:117] "RemoveContainer" containerID="5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.523958 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-59cj9" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.557394 4771 scope.go:117] "RemoveContainer" containerID="ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.574262 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-59cj9"] Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.591256 4771 scope.go:117] "RemoveContainer" containerID="cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.591264 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-59cj9"] Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.644867 4771 scope.go:117] "RemoveContainer" containerID="5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60" Jan 23 14:32:26 crc kubenswrapper[4771]: E0123 14:32:26.646533 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60\": container with ID starting with 5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60 not found: ID does not exist" containerID="5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.646639 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60"} err="failed to get container status \"5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60\": rpc error: code = NotFound desc = could not find container \"5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60\": container with ID starting with 5d690c3d246c44b9b1196033cbb9b4f05f9d59ec9d08daf3030b40d8c4b8ab60 not found: ID does not exist" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.646684 4771 scope.go:117] "RemoveContainer" containerID="ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b" Jan 23 14:32:26 crc kubenswrapper[4771]: E0123 14:32:26.647359 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b\": container with ID starting with ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b not found: ID does not exist" containerID="ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.647388 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b"} err="failed to get container status \"ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b\": rpc error: code = NotFound desc = could not find container \"ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b\": container with ID starting with ecf2099788ed0b6b402f9af59b611044a1f53ab581c76e64c7acd1a1f458e71b not found: ID does not exist" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.647425 4771 scope.go:117] "RemoveContainer" containerID="cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3" Jan 23 14:32:26 crc kubenswrapper[4771]: E0123 14:32:26.647800 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3\": container with ID starting with cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3 not found: ID does not exist" containerID="cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3" Jan 23 14:32:26 crc kubenswrapper[4771]: I0123 14:32:26.647829 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3"} err="failed to get container status \"cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3\": rpc error: code = NotFound desc = could not find container \"cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3\": container with ID starting with cc3b5e4adc1971beb1fa37e5276c5fc596a96f0bf2dca18e63bb473baf979db3 not found: ID does not exist" Jan 23 14:32:27 crc kubenswrapper[4771]: I0123 14:32:27.242710 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" path="/var/lib/kubelet/pods/8f48e04c-f8e3-4bc9-93d6-28ffd59d9528/volumes" Jan 23 14:32:30 crc kubenswrapper[4771]: I0123 14:32:30.311956 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:32:30 crc kubenswrapper[4771]: I0123 14:32:30.312719 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.312116 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.312893 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.312946 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.314000 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.314065 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" gracePeriod=600 Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.896290 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" exitCode=0 Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.896350 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2"} Jan 23 14:33:00 crc kubenswrapper[4771]: I0123 14:33:00.896739 4771 scope.go:117] "RemoveContainer" containerID="b3743d936d5339e4125e66f8ff38fe8721ffddf59164ea95b5547e2e38a32686" Jan 23 14:33:00 crc kubenswrapper[4771]: E0123 14:33:00.941351 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:33:01 crc kubenswrapper[4771]: I0123 14:33:01.912097 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:33:01 crc kubenswrapper[4771]: E0123 14:33:01.912854 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:33:14 crc kubenswrapper[4771]: I0123 14:33:14.229090 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:33:14 crc kubenswrapper[4771]: E0123 14:33:14.230309 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:33:29 crc kubenswrapper[4771]: I0123 14:33:29.236385 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:33:29 crc kubenswrapper[4771]: E0123 14:33:29.237502 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:33:42 crc kubenswrapper[4771]: I0123 14:33:42.229682 4771 scope.go:117] 
"RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:33:42 crc kubenswrapper[4771]: E0123 14:33:42.230588 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:33:57 crc kubenswrapper[4771]: I0123 14:33:57.228270 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:33:57 crc kubenswrapper[4771]: E0123 14:33:57.230603 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:34:11 crc kubenswrapper[4771]: I0123 14:34:11.228095 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:34:11 crc kubenswrapper[4771]: E0123 14:34:11.229069 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:34:23 crc kubenswrapper[4771]: I0123 14:34:23.228881 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:34:23 crc kubenswrapper[4771]: E0123 14:34:23.230105 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:34:34 crc kubenswrapper[4771]: I0123 14:34:34.229092 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:34:34 crc kubenswrapper[4771]: E0123 14:34:34.230144 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:34:45 crc kubenswrapper[4771]: I0123 14:34:45.229849 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:34:45 crc kubenswrapper[4771]: E0123 14:34:45.230967 4771 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:34:58 crc kubenswrapper[4771]: I0123 14:34:58.229035 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:34:58 crc kubenswrapper[4771]: E0123 14:34:58.230011 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:35:12 crc kubenswrapper[4771]: I0123 14:35:12.229303 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:35:12 crc kubenswrapper[4771]: E0123 14:35:12.230494 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:35:25 crc kubenswrapper[4771]: I0123 14:35:25.229318 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:35:25 crc kubenswrapper[4771]: E0123 14:35:25.230425 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:35:39 crc kubenswrapper[4771]: I0123 14:35:39.236102 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:35:39 crc kubenswrapper[4771]: E0123 14:35:39.237235 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:35:52 crc kubenswrapper[4771]: I0123 14:35:52.228209 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:35:52 crc kubenswrapper[4771]: E0123 14:35:52.229182 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:36:05 crc kubenswrapper[4771]: I0123 14:36:05.231959 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:36:05 crc kubenswrapper[4771]: E0123 14:36:05.233034 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:36:16 crc kubenswrapper[4771]: I0123 14:36:16.228285 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:36:16 crc kubenswrapper[4771]: E0123 14:36:16.229458 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:36:30 crc kubenswrapper[4771]: I0123 14:36:30.229222 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:36:30 crc kubenswrapper[4771]: E0123 14:36:30.231628 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:36:44 crc kubenswrapper[4771]: I0123 14:36:44.229117 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:36:44 crc kubenswrapper[4771]: E0123 14:36:44.231646 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:36:56 crc kubenswrapper[4771]: I0123 14:36:56.228331 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:36:56 crc kubenswrapper[4771]: E0123 14:36:56.229452 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:37:07 crc kubenswrapper[4771]: I0123 14:37:07.227980 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:37:07 crc kubenswrapper[4771]: E0123 14:37:07.229104 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:37:22 crc kubenswrapper[4771]: I0123 14:37:22.227937 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:37:22 crc kubenswrapper[4771]: E0123 14:37:22.228782 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.914388 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7t7kg"] Jan 23 14:37:30 crc kubenswrapper[4771]: E0123 14:37:30.915839 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="extract-utilities" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.915862 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="extract-utilities" Jan 23 14:37:30 crc kubenswrapper[4771]: E0123 14:37:30.915882 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="registry-server" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.915890 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="registry-server" Jan 23 14:37:30 crc kubenswrapper[4771]: E0123 14:37:30.915907 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="extract-content" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.915915 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="extract-content" Jan 23 14:37:30 crc kubenswrapper[4771]: E0123 14:37:30.915944 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="extract-utilities" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.915955 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="extract-utilities" Jan 23 14:37:30 crc kubenswrapper[4771]: E0123 14:37:30.915973 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="registry-server" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.915984 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="registry-server" 
Jan 23 14:37:30 crc kubenswrapper[4771]: E0123 14:37:30.916012 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="extract-content" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.916019 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="extract-content" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.916296 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6952158-945c-4d27-9cf4-a4c0d28b4127" containerName="registry-server" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.916330 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f48e04c-f8e3-4bc9-93d6-28ffd59d9528" containerName="registry-server" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.918097 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:30 crc kubenswrapper[4771]: I0123 14:37:30.933767 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7t7kg"] Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.066072 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-utilities\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.066300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-catalog-content\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.066897 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfx48\" (UniqueName: \"kubernetes.io/projected/275a9cb4-e970-4698-b3b4-70abc2da8cc0-kube-api-access-rfx48\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.169314 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfx48\" (UniqueName: \"kubernetes.io/projected/275a9cb4-e970-4698-b3b4-70abc2da8cc0-kube-api-access-rfx48\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.169445 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-utilities\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.169494 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-catalog-content\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " 
pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.170212 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-catalog-content\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.170218 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-utilities\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.191300 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfx48\" (UniqueName: \"kubernetes.io/projected/275a9cb4-e970-4698-b3b4-70abc2da8cc0-kube-api-access-rfx48\") pod \"certified-operators-7t7kg\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.242491 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.774180 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7t7kg"] Jan 23 14:37:31 crc kubenswrapper[4771]: I0123 14:37:31.879651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7t7kg" event={"ID":"275a9cb4-e970-4698-b3b4-70abc2da8cc0","Type":"ContainerStarted","Data":"cac10b03a7f7e00499b31ee3f464b06f6072fbce5ac3cc438524d0a5bd3a2655"} Jan 23 14:37:32 crc kubenswrapper[4771]: I0123 14:37:32.891797 4771 generic.go:334] "Generic (PLEG): container finished" podID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerID="61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb" exitCode=0 Jan 23 14:37:32 crc kubenswrapper[4771]: I0123 14:37:32.891848 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7t7kg" event={"ID":"275a9cb4-e970-4698-b3b4-70abc2da8cc0","Type":"ContainerDied","Data":"61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb"} Jan 23 14:37:32 crc kubenswrapper[4771]: I0123 14:37:32.895057 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:37:33 crc kubenswrapper[4771]: I0123 14:37:33.903974 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7t7kg" event={"ID":"275a9cb4-e970-4698-b3b4-70abc2da8cc0","Type":"ContainerStarted","Data":"4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c"} Jan 23 14:37:34 crc kubenswrapper[4771]: I0123 14:37:34.916727 4771 generic.go:334] "Generic (PLEG): container finished" podID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerID="4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c" exitCode=0 Jan 23 14:37:34 crc kubenswrapper[4771]: I0123 14:37:34.916869 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7t7kg" event={"ID":"275a9cb4-e970-4698-b3b4-70abc2da8cc0","Type":"ContainerDied","Data":"4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c"} 
Jan 23 14:37:35 crc kubenswrapper[4771]: I0123 14:37:35.933216 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7t7kg" event={"ID":"275a9cb4-e970-4698-b3b4-70abc2da8cc0","Type":"ContainerStarted","Data":"1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77"} Jan 23 14:37:35 crc kubenswrapper[4771]: I0123 14:37:35.990866 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7t7kg" podStartSLOduration=3.411008348 podStartE2EDuration="5.990839348s" podCreationTimestamp="2026-01-23 14:37:30 +0000 UTC" firstStartedPulling="2026-01-23 14:37:32.894745514 +0000 UTC m=+3893.917283139" lastFinishedPulling="2026-01-23 14:37:35.474576514 +0000 UTC m=+3896.497114139" observedRunningTime="2026-01-23 14:37:35.987506033 +0000 UTC m=+3897.010043678" watchObservedRunningTime="2026-01-23 14:37:35.990839348 +0000 UTC m=+3897.013376973" Jan 23 14:37:36 crc kubenswrapper[4771]: I0123 14:37:36.228494 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:37:36 crc kubenswrapper[4771]: E0123 14:37:36.228876 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:37:41 crc kubenswrapper[4771]: I0123 14:37:41.242892 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:41 crc kubenswrapper[4771]: I0123 14:37:41.243789 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:41 crc kubenswrapper[4771]: I0123 14:37:41.303327 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:42 crc kubenswrapper[4771]: I0123 14:37:42.144775 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:42 crc kubenswrapper[4771]: I0123 14:37:42.205465 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7t7kg"] Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.113378 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7t7kg" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="registry-server" containerID="cri-o://1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77" gracePeriod=2 Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.669532 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.870168 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfx48\" (UniqueName: \"kubernetes.io/projected/275a9cb4-e970-4698-b3b4-70abc2da8cc0-kube-api-access-rfx48\") pod \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.870277 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-utilities\") pod \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.870552 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-catalog-content\") pod \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\" (UID: \"275a9cb4-e970-4698-b3b4-70abc2da8cc0\") " Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.871455 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-utilities" (OuterVolumeSpecName: "utilities") pod "275a9cb4-e970-4698-b3b4-70abc2da8cc0" (UID: "275a9cb4-e970-4698-b3b4-70abc2da8cc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.881708 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275a9cb4-e970-4698-b3b4-70abc2da8cc0-kube-api-access-rfx48" (OuterVolumeSpecName: "kube-api-access-rfx48") pod "275a9cb4-e970-4698-b3b4-70abc2da8cc0" (UID: "275a9cb4-e970-4698-b3b4-70abc2da8cc0"). InnerVolumeSpecName "kube-api-access-rfx48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.926721 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "275a9cb4-e970-4698-b3b4-70abc2da8cc0" (UID: "275a9cb4-e970-4698-b3b4-70abc2da8cc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.974384 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfx48\" (UniqueName: \"kubernetes.io/projected/275a9cb4-e970-4698-b3b4-70abc2da8cc0-kube-api-access-rfx48\") on node \"crc\" DevicePath \"\"" Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.974467 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:37:44 crc kubenswrapper[4771]: I0123 14:37:44.974483 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/275a9cb4-e970-4698-b3b4-70abc2da8cc0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.128154 4771 generic.go:334] "Generic (PLEG): container finished" podID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerID="1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77" exitCode=0 Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.128228 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7t7kg" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.128248 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7t7kg" event={"ID":"275a9cb4-e970-4698-b3b4-70abc2da8cc0","Type":"ContainerDied","Data":"1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77"} Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.129000 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7t7kg" event={"ID":"275a9cb4-e970-4698-b3b4-70abc2da8cc0","Type":"ContainerDied","Data":"cac10b03a7f7e00499b31ee3f464b06f6072fbce5ac3cc438524d0a5bd3a2655"} Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.129021 4771 scope.go:117] "RemoveContainer" containerID="1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.159648 4771 scope.go:117] "RemoveContainer" containerID="4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.172820 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7t7kg"] Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.183792 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7t7kg"] Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.190003 4771 scope.go:117] "RemoveContainer" containerID="61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.246648 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" path="/var/lib/kubelet/pods/275a9cb4-e970-4698-b3b4-70abc2da8cc0/volumes" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.251504 4771 scope.go:117] "RemoveContainer" containerID="1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77" Jan 23 14:37:45 crc kubenswrapper[4771]: E0123 14:37:45.260772 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77\": container with ID 
starting with 1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77 not found: ID does not exist" containerID="1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.260855 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77"} err="failed to get container status \"1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77\": rpc error: code = NotFound desc = could not find container \"1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77\": container with ID starting with 1356a01ab72731a67f56e9ea399785f394899b4f112c59476369e62be6d49b77 not found: ID does not exist" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.260904 4771 scope.go:117] "RemoveContainer" containerID="4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c" Jan 23 14:37:45 crc kubenswrapper[4771]: E0123 14:37:45.261845 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c\": container with ID starting with 4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c not found: ID does not exist" containerID="4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.261912 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c"} err="failed to get container status \"4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c\": rpc error: code = NotFound desc = could not find container \"4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c\": container with ID starting with 4a860f79870d463abbbbbadd48214ca229489946540c4ee5573a7998f9733b9c not found: ID does not exist" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.261937 4771 scope.go:117] "RemoveContainer" containerID="61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb" Jan 23 14:37:45 crc kubenswrapper[4771]: E0123 14:37:45.262514 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb\": container with ID starting with 61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb not found: ID does not exist" containerID="61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb" Jan 23 14:37:45 crc kubenswrapper[4771]: I0123 14:37:45.262548 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb"} err="failed to get container status \"61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb\": rpc error: code = NotFound desc = could not find container \"61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb\": container with ID starting with 61fade43656a17ece3bcb96d2cd9f24beae52fbd4f9c718c4ff57b5962c6c4fb not found: ID does not exist" Jan 23 14:37:49 crc kubenswrapper[4771]: I0123 14:37:49.237915 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:37:49 crc kubenswrapper[4771]: E0123 14:37:49.239958 4771 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:38:04 crc kubenswrapper[4771]: I0123 14:38:04.228614 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:38:05 crc kubenswrapper[4771]: I0123 14:38:05.362441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"ca63fdb5b452c327c4a8fb076acc25568ffefad598db0bca1a36d08ab836aa13"} Jan 23 14:40:26 crc kubenswrapper[4771]: I0123 14:40:26.767715 4771 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="90863ead-98c1-4258-b980-919471f6d76c" containerName="galera" probeResult="failure" output="command timed out" Jan 23 14:40:26 crc kubenswrapper[4771]: I0123 14:40:26.767803 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="90863ead-98c1-4258-b980-919471f6d76c" containerName="galera" probeResult="failure" output="command timed out" Jan 23 14:40:30 crc kubenswrapper[4771]: I0123 14:40:30.312045 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:40:30 crc kubenswrapper[4771]: I0123 14:40:30.312713 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:41:00 crc kubenswrapper[4771]: I0123 14:41:00.312437 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:41:00 crc kubenswrapper[4771]: I0123 14:41:00.313164 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.311917 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.313008 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.313091 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.314553 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca63fdb5b452c327c4a8fb076acc25568ffefad598db0bca1a36d08ab836aa13"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.314678 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://ca63fdb5b452c327c4a8fb076acc25568ffefad598db0bca1a36d08ab836aa13" gracePeriod=600 Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.712847 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="ca63fdb5b452c327c4a8fb076acc25568ffefad598db0bca1a36d08ab836aa13" exitCode=0 Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.712909 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"ca63fdb5b452c327c4a8fb076acc25568ffefad598db0bca1a36d08ab836aa13"} Jan 23 14:41:30 crc kubenswrapper[4771]: I0123 14:41:30.712957 4771 scope.go:117] "RemoveContainer" containerID="80d7d884408696914c49ab1ff6c641e62ac15564ea20ecb44d0671487cb055e2" Jan 23 14:41:32 crc kubenswrapper[4771]: I0123 14:41:32.744343 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564"} Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.359871 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gw9ht"] Jan 23 14:42:23 crc kubenswrapper[4771]: E0123 14:42:23.361199 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="registry-server" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.361215 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="registry-server" Jan 23 14:42:23 crc kubenswrapper[4771]: E0123 14:42:23.361254 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="extract-content" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.361260 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="extract-content" Jan 23 14:42:23 crc kubenswrapper[4771]: E0123 14:42:23.361277 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="extract-utilities" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.361284 4771 
state_mem.go:107] "Deleted CPUSet assignment" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="extract-utilities" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.361588 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="275a9cb4-e970-4698-b3b4-70abc2da8cc0" containerName="registry-server" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.363667 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.405546 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gw9ht"] Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.416178 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcb6\" (UniqueName: \"kubernetes.io/projected/cbdd7d57-3b08-43e4-9443-b7514acee227-kube-api-access-mjcb6\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.416269 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-utilities\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.416353 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-catalog-content\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.519233 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjcb6\" (UniqueName: \"kubernetes.io/projected/cbdd7d57-3b08-43e4-9443-b7514acee227-kube-api-access-mjcb6\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.519375 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-utilities\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.519469 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-catalog-content\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.520090 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-utilities\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:23 crc kubenswrapper[4771]: I0123 14:42:23.520135 4771 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-catalog-content\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:24 crc kubenswrapper[4771]: I0123 14:42:24.068589 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjcb6\" (UniqueName: \"kubernetes.io/projected/cbdd7d57-3b08-43e4-9443-b7514acee227-kube-api-access-mjcb6\") pod \"redhat-operators-gw9ht\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:24 crc kubenswrapper[4771]: I0123 14:42:24.316523 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:24 crc kubenswrapper[4771]: I0123 14:42:24.875484 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gw9ht"] Jan 23 14:42:25 crc kubenswrapper[4771]: I0123 14:42:25.467913 4771 generic.go:334] "Generic (PLEG): container finished" podID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerID="832a8ac934a8a2d4738e0e772e089e8b58d391521aca1dbdda468400a2c0e141" exitCode=0 Jan 23 14:42:25 crc kubenswrapper[4771]: I0123 14:42:25.467976 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gw9ht" event={"ID":"cbdd7d57-3b08-43e4-9443-b7514acee227","Type":"ContainerDied","Data":"832a8ac934a8a2d4738e0e772e089e8b58d391521aca1dbdda468400a2c0e141"} Jan 23 14:42:25 crc kubenswrapper[4771]: I0123 14:42:25.468307 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gw9ht" event={"ID":"cbdd7d57-3b08-43e4-9443-b7514acee227","Type":"ContainerStarted","Data":"865015c4744b7b27167e67d3dcc4afbd52c713f3c719d11587a92d95cf7db63c"} Jan 23 14:42:27 crc kubenswrapper[4771]: I0123 14:42:27.490162 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gw9ht" event={"ID":"cbdd7d57-3b08-43e4-9443-b7514acee227","Type":"ContainerStarted","Data":"3e5d04f5baef898899556557058dc8de3029ea09dfda734a0975e8c88b765442"} Jan 23 14:42:27 crc kubenswrapper[4771]: I0123 14:42:27.739181 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5hz8"] Jan 23 14:42:27 crc kubenswrapper[4771]: I0123 14:42:27.742425 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:27 crc kubenswrapper[4771]: I0123 14:42:27.758828 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5hz8"] Jan 23 14:42:27 crc kubenswrapper[4771]: I0123 14:42:27.937113 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdsl\" (UniqueName: \"kubernetes.io/projected/6402476a-337e-4bab-9c83-865820879f97-kube-api-access-9qdsl\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:27 crc kubenswrapper[4771]: I0123 14:42:27.937222 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-utilities\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:27 crc kubenswrapper[4771]: I0123 14:42:27.937300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-catalog-content\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.040771 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdsl\" (UniqueName: \"kubernetes.io/projected/6402476a-337e-4bab-9c83-865820879f97-kube-api-access-9qdsl\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.040903 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-utilities\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.040961 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-catalog-content\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.041584 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-catalog-content\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.041654 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-utilities\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.078068 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9qdsl\" (UniqueName: \"kubernetes.io/projected/6402476a-337e-4bab-9c83-865820879f97-kube-api-access-9qdsl\") pod \"redhat-marketplace-w5hz8\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.375228 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:28 crc kubenswrapper[4771]: I0123 14:42:28.978121 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5hz8"] Jan 23 14:42:28 crc kubenswrapper[4771]: W0123 14:42:28.981665 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6402476a_337e_4bab_9c83_865820879f97.slice/crio-9f283936b3caf68eb81e736f3d988003538582943bdfa4187916e80c902a3fd0 WatchSource:0}: Error finding container 9f283936b3caf68eb81e736f3d988003538582943bdfa4187916e80c902a3fd0: Status 404 returned error can't find the container with id 9f283936b3caf68eb81e736f3d988003538582943bdfa4187916e80c902a3fd0 Jan 23 14:42:29 crc kubenswrapper[4771]: I0123 14:42:29.546926 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerStarted","Data":"b17d0fa180dbdb049d7de9e7f6cca867a87d085356818005ddda9ad85466e3a2"} Jan 23 14:42:29 crc kubenswrapper[4771]: I0123 14:42:29.547369 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerStarted","Data":"9f283936b3caf68eb81e736f3d988003538582943bdfa4187916e80c902a3fd0"} Jan 23 14:42:30 crc kubenswrapper[4771]: I0123 14:42:30.561108 4771 generic.go:334] "Generic (PLEG): container finished" podID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerID="3e5d04f5baef898899556557058dc8de3029ea09dfda734a0975e8c88b765442" exitCode=0 Jan 23 14:42:30 crc kubenswrapper[4771]: I0123 14:42:30.561212 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gw9ht" event={"ID":"cbdd7d57-3b08-43e4-9443-b7514acee227","Type":"ContainerDied","Data":"3e5d04f5baef898899556557058dc8de3029ea09dfda734a0975e8c88b765442"} Jan 23 14:42:30 crc kubenswrapper[4771]: I0123 14:42:30.563562 4771 generic.go:334] "Generic (PLEG): container finished" podID="6402476a-337e-4bab-9c83-865820879f97" containerID="b17d0fa180dbdb049d7de9e7f6cca867a87d085356818005ddda9ad85466e3a2" exitCode=0 Jan 23 14:42:30 crc kubenswrapper[4771]: I0123 14:42:30.563611 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerDied","Data":"b17d0fa180dbdb049d7de9e7f6cca867a87d085356818005ddda9ad85466e3a2"} Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.589347 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerStarted","Data":"c3f0c7ad2abfe58b2b57eb7542d85e2501f87410902b02f26433a0eb9f9bf5ce"} Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.593018 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gw9ht" 
event={"ID":"cbdd7d57-3b08-43e4-9443-b7514acee227","Type":"ContainerStarted","Data":"fc85fd1b7c661b91e607f8bd4231c33ad0156349fa25511ce98167400dd1601c"} Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.641260 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gw9ht" podStartSLOduration=3.137516162 podStartE2EDuration="8.641236202s" podCreationTimestamp="2026-01-23 14:42:23 +0000 UTC" firstStartedPulling="2026-01-23 14:42:25.469857476 +0000 UTC m=+4186.492395111" lastFinishedPulling="2026-01-23 14:42:30.973577526 +0000 UTC m=+4191.996115151" observedRunningTime="2026-01-23 14:42:31.637306158 +0000 UTC m=+4192.659843793" watchObservedRunningTime="2026-01-23 14:42:31.641236202 +0000 UTC m=+4192.663773827" Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.930531 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s57cc"] Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.938705 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.963040 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s57cc"] Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.974875 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpmls\" (UniqueName: \"kubernetes.io/projected/c4ae536f-a8e5-49f1-add9-8d45a087081a-kube-api-access-lpmls\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.974992 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-utilities\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:31 crc kubenswrapper[4771]: I0123 14:42:31.975013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-catalog-content\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.077913 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpmls\" (UniqueName: \"kubernetes.io/projected/c4ae536f-a8e5-49f1-add9-8d45a087081a-kube-api-access-lpmls\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.077987 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-utilities\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.078010 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-catalog-content\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.078702 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-catalog-content\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.078809 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-utilities\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.105499 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpmls\" (UniqueName: \"kubernetes.io/projected/c4ae536f-a8e5-49f1-add9-8d45a087081a-kube-api-access-lpmls\") pod \"community-operators-s57cc\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.276226 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.627471 4771 generic.go:334] "Generic (PLEG): container finished" podID="6402476a-337e-4bab-9c83-865820879f97" containerID="c3f0c7ad2abfe58b2b57eb7542d85e2501f87410902b02f26433a0eb9f9bf5ce" exitCode=0 Jan 23 14:42:32 crc kubenswrapper[4771]: I0123 14:42:32.627899 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerDied","Data":"c3f0c7ad2abfe58b2b57eb7542d85e2501f87410902b02f26433a0eb9f9bf5ce"} Jan 23 14:42:33 crc kubenswrapper[4771]: I0123 14:42:33.003137 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s57cc"] Jan 23 14:42:33 crc kubenswrapper[4771]: W0123 14:42:33.003821 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4ae536f_a8e5_49f1_add9_8d45a087081a.slice/crio-6133b3e6aa7eefc102e500875540e987ab7f1ad1d360653f92a4eed2fde5e4f9 WatchSource:0}: Error finding container 6133b3e6aa7eefc102e500875540e987ab7f1ad1d360653f92a4eed2fde5e4f9: Status 404 returned error can't find the container with id 6133b3e6aa7eefc102e500875540e987ab7f1ad1d360653f92a4eed2fde5e4f9 Jan 23 14:42:33 crc kubenswrapper[4771]: I0123 14:42:33.645472 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerStarted","Data":"bbd64c873e5329e60b9e58c599c3670039e157fbd500d1078073996c020eb5db"} Jan 23 14:42:33 crc kubenswrapper[4771]: I0123 14:42:33.647711 4771 generic.go:334] "Generic (PLEG): container finished" podID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerID="493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964" exitCode=0 Jan 23 14:42:33 crc kubenswrapper[4771]: I0123 14:42:33.647804 4771 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-s57cc" event={"ID":"c4ae536f-a8e5-49f1-add9-8d45a087081a","Type":"ContainerDied","Data":"493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964"} Jan 23 14:42:33 crc kubenswrapper[4771]: I0123 14:42:33.647857 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s57cc" event={"ID":"c4ae536f-a8e5-49f1-add9-8d45a087081a","Type":"ContainerStarted","Data":"6133b3e6aa7eefc102e500875540e987ab7f1ad1d360653f92a4eed2fde5e4f9"} Jan 23 14:42:33 crc kubenswrapper[4771]: I0123 14:42:33.649779 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:42:33 crc kubenswrapper[4771]: I0123 14:42:33.670591 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5hz8" podStartSLOduration=4.017260014 podStartE2EDuration="6.670565466s" podCreationTimestamp="2026-01-23 14:42:27 +0000 UTC" firstStartedPulling="2026-01-23 14:42:30.564669726 +0000 UTC m=+4191.587207351" lastFinishedPulling="2026-01-23 14:42:33.217975178 +0000 UTC m=+4194.240512803" observedRunningTime="2026-01-23 14:42:33.664483305 +0000 UTC m=+4194.687020940" watchObservedRunningTime="2026-01-23 14:42:33.670565466 +0000 UTC m=+4194.693103091" Jan 23 14:42:34 crc kubenswrapper[4771]: I0123 14:42:34.317372 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:34 crc kubenswrapper[4771]: I0123 14:42:34.317445 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:35 crc kubenswrapper[4771]: I0123 14:42:35.518475 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gw9ht" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="registry-server" probeResult="failure" output=< Jan 23 14:42:35 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 14:42:35 crc kubenswrapper[4771]: > Jan 23 14:42:35 crc kubenswrapper[4771]: I0123 14:42:35.673631 4771 generic.go:334] "Generic (PLEG): container finished" podID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerID="0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b" exitCode=0 Jan 23 14:42:35 crc kubenswrapper[4771]: I0123 14:42:35.673689 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s57cc" event={"ID":"c4ae536f-a8e5-49f1-add9-8d45a087081a","Type":"ContainerDied","Data":"0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b"} Jan 23 14:42:37 crc kubenswrapper[4771]: I0123 14:42:37.697651 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s57cc" event={"ID":"c4ae536f-a8e5-49f1-add9-8d45a087081a","Type":"ContainerStarted","Data":"183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1"} Jan 23 14:42:37 crc kubenswrapper[4771]: I0123 14:42:37.729848 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s57cc" podStartSLOduration=4.101707091 podStartE2EDuration="6.729815073s" podCreationTimestamp="2026-01-23 14:42:31 +0000 UTC" firstStartedPulling="2026-01-23 14:42:33.649492457 +0000 UTC m=+4194.672030072" lastFinishedPulling="2026-01-23 14:42:36.277600429 +0000 UTC m=+4197.300138054" observedRunningTime="2026-01-23 14:42:37.71853238 
+0000 UTC m=+4198.741070025" watchObservedRunningTime="2026-01-23 14:42:37.729815073 +0000 UTC m=+4198.752352698" Jan 23 14:42:38 crc kubenswrapper[4771]: I0123 14:42:38.375888 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:38 crc kubenswrapper[4771]: I0123 14:42:38.376462 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:38 crc kubenswrapper[4771]: I0123 14:42:38.429455 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:38 crc kubenswrapper[4771]: I0123 14:42:38.772866 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:39 crc kubenswrapper[4771]: I0123 14:42:39.114088 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5hz8"] Jan 23 14:42:40 crc kubenswrapper[4771]: I0123 14:42:40.728165 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5hz8" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="registry-server" containerID="cri-o://bbd64c873e5329e60b9e58c599c3670039e157fbd500d1078073996c020eb5db" gracePeriod=2 Jan 23 14:42:41 crc kubenswrapper[4771]: I0123 14:42:41.743155 4771 generic.go:334] "Generic (PLEG): container finished" podID="6402476a-337e-4bab-9c83-865820879f97" containerID="bbd64c873e5329e60b9e58c599c3670039e157fbd500d1078073996c020eb5db" exitCode=0 Jan 23 14:42:41 crc kubenswrapper[4771]: I0123 14:42:41.743229 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerDied","Data":"bbd64c873e5329e60b9e58c599c3670039e157fbd500d1078073996c020eb5db"} Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.276639 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.276732 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.351136 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.480935 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.565721 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-catalog-content\") pod \"6402476a-337e-4bab-9c83-865820879f97\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.566074 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-utilities\") pod \"6402476a-337e-4bab-9c83-865820879f97\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.566177 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qdsl\" (UniqueName: \"kubernetes.io/projected/6402476a-337e-4bab-9c83-865820879f97-kube-api-access-9qdsl\") pod \"6402476a-337e-4bab-9c83-865820879f97\" (UID: \"6402476a-337e-4bab-9c83-865820879f97\") " Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.566559 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-utilities" (OuterVolumeSpecName: "utilities") pod "6402476a-337e-4bab-9c83-865820879f97" (UID: "6402476a-337e-4bab-9c83-865820879f97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.567455 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.573077 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402476a-337e-4bab-9c83-865820879f97-kube-api-access-9qdsl" (OuterVolumeSpecName: "kube-api-access-9qdsl") pod "6402476a-337e-4bab-9c83-865820879f97" (UID: "6402476a-337e-4bab-9c83-865820879f97"). InnerVolumeSpecName "kube-api-access-9qdsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.592856 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6402476a-337e-4bab-9c83-865820879f97" (UID: "6402476a-337e-4bab-9c83-865820879f97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.669365 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6402476a-337e-4bab-9c83-865820879f97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.669423 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qdsl\" (UniqueName: \"kubernetes.io/projected/6402476a-337e-4bab-9c83-865820879f97-kube-api-access-9qdsl\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.758072 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5hz8" event={"ID":"6402476a-337e-4bab-9c83-865820879f97","Type":"ContainerDied","Data":"9f283936b3caf68eb81e736f3d988003538582943bdfa4187916e80c902a3fd0"} Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.758111 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5hz8" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.758154 4771 scope.go:117] "RemoveContainer" containerID="bbd64c873e5329e60b9e58c599c3670039e157fbd500d1078073996c020eb5db" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.810359 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5hz8"] Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.811466 4771 scope.go:117] "RemoveContainer" containerID="c3f0c7ad2abfe58b2b57eb7542d85e2501f87410902b02f26433a0eb9f9bf5ce" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.820121 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5hz8"] Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.835142 4771 scope.go:117] "RemoveContainer" containerID="b17d0fa180dbdb049d7de9e7f6cca867a87d085356818005ddda9ad85466e3a2" Jan 23 14:42:42 crc kubenswrapper[4771]: I0123 14:42:42.846617 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:43 crc kubenswrapper[4771]: I0123 14:42:43.243257 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402476a-337e-4bab-9c83-865820879f97" path="/var/lib/kubelet/pods/6402476a-337e-4bab-9c83-865820879f97/volumes" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.113511 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s57cc"] Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.113808 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s57cc" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="registry-server" containerID="cri-o://183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1" gracePeriod=2 Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.369321 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gw9ht" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="registry-server" probeResult="failure" output=< Jan 23 14:42:45 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 14:42:45 crc kubenswrapper[4771]: > Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.657495 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.772934 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpmls\" (UniqueName: \"kubernetes.io/projected/c4ae536f-a8e5-49f1-add9-8d45a087081a-kube-api-access-lpmls\") pod \"c4ae536f-a8e5-49f1-add9-8d45a087081a\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.773606 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-utilities\") pod \"c4ae536f-a8e5-49f1-add9-8d45a087081a\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.773769 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-catalog-content\") pod \"c4ae536f-a8e5-49f1-add9-8d45a087081a\" (UID: \"c4ae536f-a8e5-49f1-add9-8d45a087081a\") " Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.774829 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-utilities" (OuterVolumeSpecName: "utilities") pod "c4ae536f-a8e5-49f1-add9-8d45a087081a" (UID: "c4ae536f-a8e5-49f1-add9-8d45a087081a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.779960 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ae536f-a8e5-49f1-add9-8d45a087081a-kube-api-access-lpmls" (OuterVolumeSpecName: "kube-api-access-lpmls") pod "c4ae536f-a8e5-49f1-add9-8d45a087081a" (UID: "c4ae536f-a8e5-49f1-add9-8d45a087081a"). InnerVolumeSpecName "kube-api-access-lpmls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.801517 4771 generic.go:334] "Generic (PLEG): container finished" podID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerID="183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1" exitCode=0 Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.801586 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s57cc" event={"ID":"c4ae536f-a8e5-49f1-add9-8d45a087081a","Type":"ContainerDied","Data":"183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1"} Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.801635 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s57cc" event={"ID":"c4ae536f-a8e5-49f1-add9-8d45a087081a","Type":"ContainerDied","Data":"6133b3e6aa7eefc102e500875540e987ab7f1ad1d360653f92a4eed2fde5e4f9"} Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.801687 4771 scope.go:117] "RemoveContainer" containerID="183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.801954 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s57cc" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.851575 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4ae536f-a8e5-49f1-add9-8d45a087081a" (UID: "c4ae536f-a8e5-49f1-add9-8d45a087081a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.872829 4771 scope.go:117] "RemoveContainer" containerID="0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.877477 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpmls\" (UniqueName: \"kubernetes.io/projected/c4ae536f-a8e5-49f1-add9-8d45a087081a-kube-api-access-lpmls\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.877519 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.877534 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ae536f-a8e5-49f1-add9-8d45a087081a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.899918 4771 scope.go:117] "RemoveContainer" containerID="493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.960354 4771 scope.go:117] "RemoveContainer" containerID="183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1" Jan 23 14:42:45 crc kubenswrapper[4771]: E0123 14:42:45.960892 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1\": container with ID starting with 183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1 not found: ID does not exist" containerID="183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.961195 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1"} err="failed to get container status \"183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1\": rpc error: code = NotFound desc = could not find container \"183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1\": container with ID starting with 183e307e23c2d9cef3a7cded45e61cbc33ae803955ad29304e65be653b673de1 not found: ID does not exist" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.961320 4771 scope.go:117] "RemoveContainer" containerID="0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b" Jan 23 14:42:45 crc kubenswrapper[4771]: E0123 14:42:45.961769 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b\": container with ID starting with 0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b not found: ID does not exist" 
containerID="0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.961839 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b"} err="failed to get container status \"0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b\": rpc error: code = NotFound desc = could not find container \"0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b\": container with ID starting with 0b8fcfbc5576591987998f0d33c5bbfa61d657647bb5d44fe5ea14c831ef958b not found: ID does not exist" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.961883 4771 scope.go:117] "RemoveContainer" containerID="493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964" Jan 23 14:42:45 crc kubenswrapper[4771]: E0123 14:42:45.962258 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964\": container with ID starting with 493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964 not found: ID does not exist" containerID="493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964" Jan 23 14:42:45 crc kubenswrapper[4771]: I0123 14:42:45.962290 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964"} err="failed to get container status \"493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964\": rpc error: code = NotFound desc = could not find container \"493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964\": container with ID starting with 493031d44947aae3e473d51a2b8347fb00ffd39fb605d3f802f0be2731c82964 not found: ID does not exist" Jan 23 14:42:46 crc kubenswrapper[4771]: I0123 14:42:46.147423 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s57cc"] Jan 23 14:42:46 crc kubenswrapper[4771]: I0123 14:42:46.162267 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s57cc"] Jan 23 14:42:47 crc kubenswrapper[4771]: I0123 14:42:47.240507 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" path="/var/lib/kubelet/pods/c4ae536f-a8e5-49f1-add9-8d45a087081a/volumes" Jan 23 14:42:54 crc kubenswrapper[4771]: I0123 14:42:54.393822 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:54 crc kubenswrapper[4771]: I0123 14:42:54.470221 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:55 crc kubenswrapper[4771]: I0123 14:42:55.965831 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gw9ht"] Jan 23 14:42:55 crc kubenswrapper[4771]: I0123 14:42:55.966521 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gw9ht" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="registry-server" containerID="cri-o://fc85fd1b7c661b91e607f8bd4231c33ad0156349fa25511ce98167400dd1601c" gracePeriod=2 Jan 23 14:42:56 crc kubenswrapper[4771]: I0123 14:42:56.947259 4771 generic.go:334] "Generic (PLEG): container finished" 
podID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerID="fc85fd1b7c661b91e607f8bd4231c33ad0156349fa25511ce98167400dd1601c" exitCode=0 Jan 23 14:42:56 crc kubenswrapper[4771]: I0123 14:42:56.947349 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gw9ht" event={"ID":"cbdd7d57-3b08-43e4-9443-b7514acee227","Type":"ContainerDied","Data":"fc85fd1b7c661b91e607f8bd4231c33ad0156349fa25511ce98167400dd1601c"} Jan 23 14:42:56 crc kubenswrapper[4771]: I0123 14:42:56.947809 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gw9ht" event={"ID":"cbdd7d57-3b08-43e4-9443-b7514acee227","Type":"ContainerDied","Data":"865015c4744b7b27167e67d3dcc4afbd52c713f3c719d11587a92d95cf7db63c"} Jan 23 14:42:56 crc kubenswrapper[4771]: I0123 14:42:56.947833 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="865015c4744b7b27167e67d3dcc4afbd52c713f3c719d11587a92d95cf7db63c" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.057941 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.177169 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-catalog-content\") pod \"cbdd7d57-3b08-43e4-9443-b7514acee227\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.177284 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-utilities\") pod \"cbdd7d57-3b08-43e4-9443-b7514acee227\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.177587 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjcb6\" (UniqueName: \"kubernetes.io/projected/cbdd7d57-3b08-43e4-9443-b7514acee227-kube-api-access-mjcb6\") pod \"cbdd7d57-3b08-43e4-9443-b7514acee227\" (UID: \"cbdd7d57-3b08-43e4-9443-b7514acee227\") " Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.178615 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-utilities" (OuterVolumeSpecName: "utilities") pod "cbdd7d57-3b08-43e4-9443-b7514acee227" (UID: "cbdd7d57-3b08-43e4-9443-b7514acee227"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.193018 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbdd7d57-3b08-43e4-9443-b7514acee227-kube-api-access-mjcb6" (OuterVolumeSpecName: "kube-api-access-mjcb6") pod "cbdd7d57-3b08-43e4-9443-b7514acee227" (UID: "cbdd7d57-3b08-43e4-9443-b7514acee227"). InnerVolumeSpecName "kube-api-access-mjcb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.280860 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjcb6\" (UniqueName: \"kubernetes.io/projected/cbdd7d57-3b08-43e4-9443-b7514acee227-kube-api-access-mjcb6\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.280904 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.326702 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbdd7d57-3b08-43e4-9443-b7514acee227" (UID: "cbdd7d57-3b08-43e4-9443-b7514acee227"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.384005 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdd7d57-3b08-43e4-9443-b7514acee227-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.957962 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gw9ht" Jan 23 14:42:57 crc kubenswrapper[4771]: I0123 14:42:57.996541 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gw9ht"] Jan 23 14:42:58 crc kubenswrapper[4771]: I0123 14:42:58.007282 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gw9ht"] Jan 23 14:42:59 crc kubenswrapper[4771]: I0123 14:42:59.243855 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" path="/var/lib/kubelet/pods/cbdd7d57-3b08-43e4-9443-b7514acee227/volumes" Jan 23 14:44:00 crc kubenswrapper[4771]: I0123 14:44:00.312469 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:44:00 crc kubenswrapper[4771]: I0123 14:44:00.313212 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:44:30 crc kubenswrapper[4771]: I0123 14:44:30.312314 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:44:30 crc kubenswrapper[4771]: I0123 14:44:30.313097 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.228706 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn"] Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230173 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230199 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230238 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="extract-utilities" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230248 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="extract-utilities" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230270 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="extract-content" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230284 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="extract-content" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230304 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230314 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230326 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230333 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230343 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="extract-utilities" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230351 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="extract-utilities" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230372 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="extract-content" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230381 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="extract-content" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230403 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="extract-utilities" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230439 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="extract-utilities" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.230464 4771 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="extract-content" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230472 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="extract-content" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230770 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6402476a-337e-4bab-9c83-865820879f97" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230801 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdd7d57-3b08-43e4-9443-b7514acee227" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.230863 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ae536f-a8e5-49f1-add9-8d45a087081a" containerName="registry-server" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.233486 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.238928 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.239248 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.244400 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn"] Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.312309 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.312396 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.312595 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.313792 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.313865 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" gracePeriod=600 Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.319954 4771 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b915df75-b9fb-44ce-8604-e65d8990cf26-config-volume\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.320163 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b915df75-b9fb-44ce-8604-e65d8990cf26-secret-volume\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.320190 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp4k9\" (UniqueName: \"kubernetes.io/projected/b915df75-b9fb-44ce-8604-e65d8990cf26-kube-api-access-hp4k9\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.422255 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b915df75-b9fb-44ce-8604-e65d8990cf26-config-volume\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.422828 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b915df75-b9fb-44ce-8604-e65d8990cf26-secret-volume\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.422874 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp4k9\" (UniqueName: \"kubernetes.io/projected/b915df75-b9fb-44ce-8604-e65d8990cf26-kube-api-access-hp4k9\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: I0123 14:45:00.423561 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b915df75-b9fb-44ce-8604-e65d8990cf26-config-volume\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:00 crc kubenswrapper[4771]: E0123 14:45:00.447975 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8e44e1_6639_45d3_927f_347dc88e96c6.slice/crio-79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8e44e1_6639_45d3_927f_347dc88e96c6.slice/crio-conmon-79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564.scope\": 
RecentStats: unable to find data in memory cache]" Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.065454 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b915df75-b9fb-44ce-8604-e65d8990cf26-secret-volume\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.066447 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp4k9\" (UniqueName: \"kubernetes.io/projected/b915df75-b9fb-44ce-8604-e65d8990cf26-kube-api-access-hp4k9\") pod \"collect-profiles-29486325-vb2pn\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:01 crc kubenswrapper[4771]: E0123 14:45:01.072814 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.180539 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.312094 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" exitCode=0 Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.312155 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564"} Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.312204 4771 scope.go:117] "RemoveContainer" containerID="ca63fdb5b452c327c4a8fb076acc25568ffefad598db0bca1a36d08ab836aa13" Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.313908 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:45:01 crc kubenswrapper[4771]: E0123 14:45:01.314271 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:45:01 crc kubenswrapper[4771]: I0123 14:45:01.725253 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn"] Jan 23 14:45:02 crc kubenswrapper[4771]: I0123 14:45:02.323440 4771 generic.go:334] "Generic (PLEG): container finished" podID="b915df75-b9fb-44ce-8604-e65d8990cf26" containerID="d3fa10be9f000948bebeec0221182dacc167eee2793765068bb3edc21db9d338" exitCode=0 Jan 23 14:45:02 crc kubenswrapper[4771]: I0123 14:45:02.323634 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" event={"ID":"b915df75-b9fb-44ce-8604-e65d8990cf26","Type":"ContainerDied","Data":"d3fa10be9f000948bebeec0221182dacc167eee2793765068bb3edc21db9d338"} Jan 23 14:45:02 crc kubenswrapper[4771]: I0123 14:45:02.323874 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" event={"ID":"b915df75-b9fb-44ce-8604-e65d8990cf26","Type":"ContainerStarted","Data":"104db235a1c804df446ff2b8af47a1e4de2d996b603975411f77030925e0aff8"} Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.812153 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.936440 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b915df75-b9fb-44ce-8604-e65d8990cf26-secret-volume\") pod \"b915df75-b9fb-44ce-8604-e65d8990cf26\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.936505 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp4k9\" (UniqueName: \"kubernetes.io/projected/b915df75-b9fb-44ce-8604-e65d8990cf26-kube-api-access-hp4k9\") pod \"b915df75-b9fb-44ce-8604-e65d8990cf26\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.936756 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b915df75-b9fb-44ce-8604-e65d8990cf26-config-volume\") pod \"b915df75-b9fb-44ce-8604-e65d8990cf26\" (UID: \"b915df75-b9fb-44ce-8604-e65d8990cf26\") " Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.937387 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b915df75-b9fb-44ce-8604-e65d8990cf26-config-volume" (OuterVolumeSpecName: "config-volume") pod "b915df75-b9fb-44ce-8604-e65d8990cf26" (UID: "b915df75-b9fb-44ce-8604-e65d8990cf26"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.937566 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b915df75-b9fb-44ce-8604-e65d8990cf26-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.943108 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b915df75-b9fb-44ce-8604-e65d8990cf26-kube-api-access-hp4k9" (OuterVolumeSpecName: "kube-api-access-hp4k9") pod "b915df75-b9fb-44ce-8604-e65d8990cf26" (UID: "b915df75-b9fb-44ce-8604-e65d8990cf26"). InnerVolumeSpecName "kube-api-access-hp4k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:03 crc kubenswrapper[4771]: I0123 14:45:03.945096 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b915df75-b9fb-44ce-8604-e65d8990cf26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b915df75-b9fb-44ce-8604-e65d8990cf26" (UID: "b915df75-b9fb-44ce-8604-e65d8990cf26"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:45:04 crc kubenswrapper[4771]: I0123 14:45:04.040388 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b915df75-b9fb-44ce-8604-e65d8990cf26-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:04 crc kubenswrapper[4771]: I0123 14:45:04.040496 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp4k9\" (UniqueName: \"kubernetes.io/projected/b915df75-b9fb-44ce-8604-e65d8990cf26-kube-api-access-hp4k9\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:04 crc kubenswrapper[4771]: I0123 14:45:04.353024 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" event={"ID":"b915df75-b9fb-44ce-8604-e65d8990cf26","Type":"ContainerDied","Data":"104db235a1c804df446ff2b8af47a1e4de2d996b603975411f77030925e0aff8"} Jan 23 14:45:04 crc kubenswrapper[4771]: I0123 14:45:04.353072 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="104db235a1c804df446ff2b8af47a1e4de2d996b603975411f77030925e0aff8" Jan 23 14:45:04 crc kubenswrapper[4771]: I0123 14:45:04.353159 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn" Jan 23 14:45:04 crc kubenswrapper[4771]: I0123 14:45:04.906629 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn"] Jan 23 14:45:04 crc kubenswrapper[4771]: I0123 14:45:04.919017 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-q46pn"] Jan 23 14:45:05 crc kubenswrapper[4771]: I0123 14:45:05.241055 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf1ca2c3-4bbe-4b25-a648-538a05e742cd" path="/var/lib/kubelet/pods/cf1ca2c3-4bbe-4b25-a648-538a05e742cd/volumes" Jan 23 14:45:13 crc kubenswrapper[4771]: I0123 14:45:13.228342 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:45:13 crc kubenswrapper[4771]: E0123 14:45:13.229343 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:45:24 crc kubenswrapper[4771]: I0123 14:45:24.228784 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:45:24 crc kubenswrapper[4771]: E0123 14:45:24.229963 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:45:36 crc kubenswrapper[4771]: I0123 14:45:36.228714 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:45:36 
crc kubenswrapper[4771]: E0123 14:45:36.229671 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:45:49 crc kubenswrapper[4771]: I0123 14:45:49.237718 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:45:49 crc kubenswrapper[4771]: E0123 14:45:49.238881 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:46:00 crc kubenswrapper[4771]: I0123 14:46:00.228547 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:46:00 crc kubenswrapper[4771]: E0123 14:46:00.229489 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:46:00 crc kubenswrapper[4771]: I0123 14:46:00.745304 4771 scope.go:117] "RemoveContainer" containerID="49a7f7d9b91dc929a1ead7f1fb924d020cb58649d231ac5dccb357676bbc3c47" Jan 23 14:46:15 crc kubenswrapper[4771]: I0123 14:46:15.229053 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:46:15 crc kubenswrapper[4771]: E0123 14:46:15.230003 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:46:29 crc kubenswrapper[4771]: I0123 14:46:29.236515 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:46:29 crc kubenswrapper[4771]: E0123 14:46:29.237533 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:46:43 crc kubenswrapper[4771]: I0123 14:46:43.228792 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:46:43 crc 
kubenswrapper[4771]: E0123 14:46:43.230082 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:46:55 crc kubenswrapper[4771]: I0123 14:46:55.228947 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:46:55 crc kubenswrapper[4771]: E0123 14:46:55.230153 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:47:06 crc kubenswrapper[4771]: I0123 14:47:06.228881 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:47:06 crc kubenswrapper[4771]: E0123 14:47:06.229902 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:47:21 crc kubenswrapper[4771]: I0123 14:47:21.229236 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:47:21 crc kubenswrapper[4771]: E0123 14:47:21.230236 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:47:32 crc kubenswrapper[4771]: I0123 14:47:32.229051 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:47:32 crc kubenswrapper[4771]: E0123 14:47:32.230281 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:47:44 crc kubenswrapper[4771]: I0123 14:47:44.229463 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:47:44 crc kubenswrapper[4771]: E0123 14:47:44.230514 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:47:57 crc kubenswrapper[4771]: I0123 14:47:57.229364 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:47:57 crc kubenswrapper[4771]: E0123 14:47:57.230391 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:48:10 crc kubenswrapper[4771]: I0123 14:48:10.228403 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:48:10 crc kubenswrapper[4771]: E0123 14:48:10.229583 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:48:23 crc kubenswrapper[4771]: I0123 14:48:23.229117 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:48:23 crc kubenswrapper[4771]: E0123 14:48:23.230020 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:48:36 crc kubenswrapper[4771]: I0123 14:48:36.228357 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:48:36 crc kubenswrapper[4771]: E0123 14:48:36.229390 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:48:49 crc kubenswrapper[4771]: I0123 14:48:49.237935 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:48:49 crc kubenswrapper[4771]: E0123 14:48:49.238964 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.745653 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wqhds"] Jan 23 14:48:58 crc kubenswrapper[4771]: E0123 14:48:58.746992 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b915df75-b9fb-44ce-8604-e65d8990cf26" containerName="collect-profiles" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.747011 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="b915df75-b9fb-44ce-8604-e65d8990cf26" containerName="collect-profiles" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.747238 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="b915df75-b9fb-44ce-8604-e65d8990cf26" containerName="collect-profiles" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.753190 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.766618 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqhds"] Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.894057 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-utilities\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.894132 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-catalog-content\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.894264 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9cb\" (UniqueName: \"kubernetes.io/projected/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-kube-api-access-wk9cb\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.997096 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-utilities\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.997169 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-catalog-content\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.997257 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9cb\" (UniqueName: \"kubernetes.io/projected/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-kube-api-access-wk9cb\") 
pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.997833 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-utilities\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:58 crc kubenswrapper[4771]: I0123 14:48:58.997844 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-catalog-content\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:59 crc kubenswrapper[4771]: I0123 14:48:59.025175 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9cb\" (UniqueName: \"kubernetes.io/projected/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-kube-api-access-wk9cb\") pod \"certified-operators-wqhds\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:59 crc kubenswrapper[4771]: I0123 14:48:59.090671 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:48:59 crc kubenswrapper[4771]: I0123 14:48:59.787392 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqhds"] Jan 23 14:48:59 crc kubenswrapper[4771]: I0123 14:48:59.929509 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqhds" event={"ID":"09045d7b-6ef5-46ce-b7ab-f71e46048a1e","Type":"ContainerStarted","Data":"423967df0d91918ca221faba48542bd543a7149d80b00b952c7112eaf7cab894"} Jan 23 14:49:00 crc kubenswrapper[4771]: I0123 14:49:00.899013 4771 scope.go:117] "RemoveContainer" containerID="fc85fd1b7c661b91e607f8bd4231c33ad0156349fa25511ce98167400dd1601c" Jan 23 14:49:00 crc kubenswrapper[4771]: I0123 14:49:00.924989 4771 scope.go:117] "RemoveContainer" containerID="832a8ac934a8a2d4738e0e772e089e8b58d391521aca1dbdda468400a2c0e141" Jan 23 14:49:00 crc kubenswrapper[4771]: I0123 14:49:00.949354 4771 scope.go:117] "RemoveContainer" containerID="3e5d04f5baef898899556557058dc8de3029ea09dfda734a0975e8c88b765442" Jan 23 14:49:00 crc kubenswrapper[4771]: E0123 14:49:00.950797 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832a8ac934a8a2d4738e0e772e089e8b58d391521aca1dbdda468400a2c0e141\": container with ID starting with 832a8ac934a8a2d4738e0e772e089e8b58d391521aca1dbdda468400a2c0e141 not found: ID does not exist" containerID="832a8ac934a8a2d4738e0e772e089e8b58d391521aca1dbdda468400a2c0e141" Jan 23 14:49:00 crc kubenswrapper[4771]: I0123 14:49:00.956471 4771 generic.go:334] "Generic (PLEG): container finished" podID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerID="b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09" exitCode=0 Jan 23 14:49:00 crc kubenswrapper[4771]: I0123 14:49:00.956517 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqhds" 
event={"ID":"09045d7b-6ef5-46ce-b7ab-f71e46048a1e","Type":"ContainerDied","Data":"b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09"} Jan 23 14:49:00 crc kubenswrapper[4771]: I0123 14:49:00.959344 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:49:01 crc kubenswrapper[4771]: I0123 14:49:01.969139 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqhds" event={"ID":"09045d7b-6ef5-46ce-b7ab-f71e46048a1e","Type":"ContainerStarted","Data":"1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b"} Jan 23 14:49:02 crc kubenswrapper[4771]: I0123 14:49:02.228046 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:49:02 crc kubenswrapper[4771]: E0123 14:49:02.228372 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:49:02 crc kubenswrapper[4771]: I0123 14:49:02.983040 4771 generic.go:334] "Generic (PLEG): container finished" podID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerID="1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b" exitCode=0 Jan 23 14:49:02 crc kubenswrapper[4771]: I0123 14:49:02.983147 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqhds" event={"ID":"09045d7b-6ef5-46ce-b7ab-f71e46048a1e","Type":"ContainerDied","Data":"1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b"} Jan 23 14:49:04 crc kubenswrapper[4771]: I0123 14:49:04.000822 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqhds" event={"ID":"09045d7b-6ef5-46ce-b7ab-f71e46048a1e","Type":"ContainerStarted","Data":"cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7"} Jan 23 14:49:04 crc kubenswrapper[4771]: I0123 14:49:04.028518 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wqhds" podStartSLOduration=3.586882097 podStartE2EDuration="6.028483753s" podCreationTimestamp="2026-01-23 14:48:58 +0000 UTC" firstStartedPulling="2026-01-23 14:49:00.959012036 +0000 UTC m=+4581.981549661" lastFinishedPulling="2026-01-23 14:49:03.400613692 +0000 UTC m=+4584.423151317" observedRunningTime="2026-01-23 14:49:04.023149325 +0000 UTC m=+4585.045686950" watchObservedRunningTime="2026-01-23 14:49:04.028483753 +0000 UTC m=+4585.051021378" Jan 23 14:49:09 crc kubenswrapper[4771]: I0123 14:49:09.091661 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:49:09 crc kubenswrapper[4771]: I0123 14:49:09.092522 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:49:09 crc kubenswrapper[4771]: I0123 14:49:09.157358 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:49:10 crc kubenswrapper[4771]: I0123 14:49:10.122681 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:49:10 crc kubenswrapper[4771]: I0123 14:49:10.188300 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqhds"] Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.093876 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wqhds" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="registry-server" containerID="cri-o://cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7" gracePeriod=2 Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.710547 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.795443 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-utilities\") pod \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.796049 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-catalog-content\") pod \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.796189 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9cb\" (UniqueName: \"kubernetes.io/projected/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-kube-api-access-wk9cb\") pod \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\" (UID: \"09045d7b-6ef5-46ce-b7ab-f71e46048a1e\") " Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.796472 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-utilities" (OuterVolumeSpecName: "utilities") pod "09045d7b-6ef5-46ce-b7ab-f71e46048a1e" (UID: "09045d7b-6ef5-46ce-b7ab-f71e46048a1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.798988 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.807841 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-kube-api-access-wk9cb" (OuterVolumeSpecName: "kube-api-access-wk9cb") pod "09045d7b-6ef5-46ce-b7ab-f71e46048a1e" (UID: "09045d7b-6ef5-46ce-b7ab-f71e46048a1e"). InnerVolumeSpecName "kube-api-access-wk9cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.886895 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09045d7b-6ef5-46ce-b7ab-f71e46048a1e" (UID: "09045d7b-6ef5-46ce-b7ab-f71e46048a1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.902786 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:49:12 crc kubenswrapper[4771]: I0123 14:49:12.902849 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk9cb\" (UniqueName: \"kubernetes.io/projected/09045d7b-6ef5-46ce-b7ab-f71e46048a1e-kube-api-access-wk9cb\") on node \"crc\" DevicePath \"\"" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.107432 4771 generic.go:334] "Generic (PLEG): container finished" podID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerID="cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7" exitCode=0 Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.107492 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqhds" event={"ID":"09045d7b-6ef5-46ce-b7ab-f71e46048a1e","Type":"ContainerDied","Data":"cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7"} Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.107536 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqhds" event={"ID":"09045d7b-6ef5-46ce-b7ab-f71e46048a1e","Type":"ContainerDied","Data":"423967df0d91918ca221faba48542bd543a7149d80b00b952c7112eaf7cab894"} Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.107542 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqhds" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.107586 4771 scope.go:117] "RemoveContainer" containerID="cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.150051 4771 scope.go:117] "RemoveContainer" containerID="1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.153850 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wqhds"] Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.165557 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wqhds"] Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.180087 4771 scope.go:117] "RemoveContainer" containerID="b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.234056 4771 scope.go:117] "RemoveContainer" containerID="cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7" Jan 23 14:49:13 crc kubenswrapper[4771]: E0123 14:49:13.234832 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7\": container with ID starting with cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7 not found: ID does not exist" containerID="cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.234926 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7"} err="failed to get container status 
\"cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7\": rpc error: code = NotFound desc = could not find container \"cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7\": container with ID starting with cb85cce44c481d983d9e9cc083579eb2329e6147f5eef1d68181c6b11d42aba7 not found: ID does not exist" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.234959 4771 scope.go:117] "RemoveContainer" containerID="1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b" Jan 23 14:49:13 crc kubenswrapper[4771]: E0123 14:49:13.235327 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b\": container with ID starting with 1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b not found: ID does not exist" containerID="1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.235352 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b"} err="failed to get container status \"1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b\": rpc error: code = NotFound desc = could not find container \"1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b\": container with ID starting with 1006938a70c20780c644e3aeeccdda13611ec6e4e145b1d03807418f73da076b not found: ID does not exist" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.235367 4771 scope.go:117] "RemoveContainer" containerID="b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09" Jan 23 14:49:13 crc kubenswrapper[4771]: E0123 14:49:13.236468 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09\": container with ID starting with b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09 not found: ID does not exist" containerID="b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.236542 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09"} err="failed to get container status \"b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09\": rpc error: code = NotFound desc = could not find container \"b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09\": container with ID starting with b18c1283c3847a2521ab4bf2a4228b009eb1dd3c8ae71c5cb41750e61e910f09 not found: ID does not exist" Jan 23 14:49:13 crc kubenswrapper[4771]: I0123 14:49:13.245344 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" path="/var/lib/kubelet/pods/09045d7b-6ef5-46ce-b7ab-f71e46048a1e/volumes" Jan 23 14:49:17 crc kubenswrapper[4771]: I0123 14:49:17.228320 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:49:17 crc kubenswrapper[4771]: E0123 14:49:17.229570 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:49:29 crc kubenswrapper[4771]: I0123 14:49:29.236595 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:49:29 crc kubenswrapper[4771]: E0123 14:49:29.237441 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:49:40 crc kubenswrapper[4771]: I0123 14:49:40.229174 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:49:40 crc kubenswrapper[4771]: E0123 14:49:40.230323 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:49:55 crc kubenswrapper[4771]: I0123 14:49:55.229009 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:49:55 crc kubenswrapper[4771]: E0123 14:49:55.229933 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:50:07 crc kubenswrapper[4771]: I0123 14:50:07.228261 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564" Jan 23 14:50:07 crc kubenswrapper[4771]: I0123 14:50:07.749913 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"167af503c8d7547b9c66625aa6ba96b249098b60ee05bfc9535fb20332921a68"} Jan 23 14:52:30 crc kubenswrapper[4771]: I0123 14:52:30.312151 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:52:30 crc kubenswrapper[4771]: I0123 14:52:30.312998 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 
14:52:31.063159 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlhj9"] Jan 23 14:52:31 crc kubenswrapper[4771]: E0123 14:52:31.066103 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="extract-content" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.066190 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="extract-content" Jan 23 14:52:31 crc kubenswrapper[4771]: E0123 14:52:31.066256 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="registry-server" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.066268 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="registry-server" Jan 23 14:52:31 crc kubenswrapper[4771]: E0123 14:52:31.066315 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="extract-utilities" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.066326 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="extract-utilities" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.066802 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="09045d7b-6ef5-46ce-b7ab-f71e46048a1e" containerName="registry-server" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.069958 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.098422 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlhj9"] Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.179043 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-utilities\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.179244 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clqn\" (UniqueName: \"kubernetes.io/projected/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-kube-api-access-5clqn\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.179281 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-catalog-content\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.282551 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-utilities\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc 
kubenswrapper[4771]: I0123 14:52:31.282719 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clqn\" (UniqueName: \"kubernetes.io/projected/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-kube-api-access-5clqn\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.282753 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-catalog-content\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.283791 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-catalog-content\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.284060 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-utilities\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.312310 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clqn\" (UniqueName: \"kubernetes.io/projected/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-kube-api-access-5clqn\") pod \"community-operators-dlhj9\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:31 crc kubenswrapper[4771]: I0123 14:52:31.403650 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:32 crc kubenswrapper[4771]: I0123 14:52:32.128092 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlhj9"] Jan 23 14:52:32 crc kubenswrapper[4771]: I0123 14:52:32.427508 4771 generic.go:334] "Generic (PLEG): container finished" podID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerID="c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e" exitCode=0 Jan 23 14:52:32 crc kubenswrapper[4771]: I0123 14:52:32.427666 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlhj9" event={"ID":"0216ffcb-2ded-4c24-a0ec-36611e90ae4a","Type":"ContainerDied","Data":"c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e"} Jan 23 14:52:32 crc kubenswrapper[4771]: I0123 14:52:32.427975 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlhj9" event={"ID":"0216ffcb-2ded-4c24-a0ec-36611e90ae4a","Type":"ContainerStarted","Data":"cfba2b32bafe0a3b4a5083570dfe080b77cc5c5be6c60e63183fe1351f9db4ed"} Jan 23 14:52:34 crc kubenswrapper[4771]: I0123 14:52:34.451790 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlhj9" event={"ID":"0216ffcb-2ded-4c24-a0ec-36611e90ae4a","Type":"ContainerStarted","Data":"fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac"} Jan 23 14:52:35 crc kubenswrapper[4771]: I0123 14:52:35.465447 4771 generic.go:334] "Generic (PLEG): container finished" podID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerID="fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac" exitCode=0 Jan 23 14:52:35 crc kubenswrapper[4771]: I0123 14:52:35.465521 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlhj9" event={"ID":"0216ffcb-2ded-4c24-a0ec-36611e90ae4a","Type":"ContainerDied","Data":"fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac"} Jan 23 14:52:36 crc kubenswrapper[4771]: I0123 14:52:36.480201 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlhj9" event={"ID":"0216ffcb-2ded-4c24-a0ec-36611e90ae4a","Type":"ContainerStarted","Data":"5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf"} Jan 23 14:52:36 crc kubenswrapper[4771]: I0123 14:52:36.510712 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlhj9" podStartSLOduration=2.023767678 podStartE2EDuration="5.510680632s" podCreationTimestamp="2026-01-23 14:52:31 +0000 UTC" firstStartedPulling="2026-01-23 14:52:32.431801158 +0000 UTC m=+4793.454338783" lastFinishedPulling="2026-01-23 14:52:35.918714112 +0000 UTC m=+4796.941251737" observedRunningTime="2026-01-23 14:52:36.506543732 +0000 UTC m=+4797.529081377" watchObservedRunningTime="2026-01-23 14:52:36.510680632 +0000 UTC m=+4797.533218277" Jan 23 14:52:41 crc kubenswrapper[4771]: I0123 14:52:41.403974 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:41 crc kubenswrapper[4771]: I0123 14:52:41.405917 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:41 crc kubenswrapper[4771]: I0123 14:52:41.463712 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:41 crc kubenswrapper[4771]: I0123 14:52:41.708441 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:41 crc kubenswrapper[4771]: I0123 14:52:41.829894 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlhj9"] Jan 23 14:52:43 crc kubenswrapper[4771]: I0123 14:52:43.567597 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlhj9" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="registry-server" containerID="cri-o://5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf" gracePeriod=2 Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.098960 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.133742 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clqn\" (UniqueName: \"kubernetes.io/projected/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-kube-api-access-5clqn\") pod \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.133841 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-catalog-content\") pod \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.134059 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-utilities\") pod \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\" (UID: \"0216ffcb-2ded-4c24-a0ec-36611e90ae4a\") " Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.135074 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-utilities" (OuterVolumeSpecName: "utilities") pod "0216ffcb-2ded-4c24-a0ec-36611e90ae4a" (UID: "0216ffcb-2ded-4c24-a0ec-36611e90ae4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.146808 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-kube-api-access-5clqn" (OuterVolumeSpecName: "kube-api-access-5clqn") pod "0216ffcb-2ded-4c24-a0ec-36611e90ae4a" (UID: "0216ffcb-2ded-4c24-a0ec-36611e90ae4a"). InnerVolumeSpecName "kube-api-access-5clqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.197174 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0216ffcb-2ded-4c24-a0ec-36611e90ae4a" (UID: "0216ffcb-2ded-4c24-a0ec-36611e90ae4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.237869 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5clqn\" (UniqueName: \"kubernetes.io/projected/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-kube-api-access-5clqn\") on node \"crc\" DevicePath \"\"" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.237925 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.237938 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0216ffcb-2ded-4c24-a0ec-36611e90ae4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.583327 4771 generic.go:334] "Generic (PLEG): container finished" podID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerID="5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf" exitCode=0 Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.583394 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlhj9" event={"ID":"0216ffcb-2ded-4c24-a0ec-36611e90ae4a","Type":"ContainerDied","Data":"5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf"} Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.583478 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlhj9" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.583503 4771 scope.go:117] "RemoveContainer" containerID="5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.583485 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlhj9" event={"ID":"0216ffcb-2ded-4c24-a0ec-36611e90ae4a","Type":"ContainerDied","Data":"cfba2b32bafe0a3b4a5083570dfe080b77cc5c5be6c60e63183fe1351f9db4ed"} Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.609233 4771 scope.go:117] "RemoveContainer" containerID="fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.627091 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlhj9"] Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.641941 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlhj9"] Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.656282 4771 scope.go:117] "RemoveContainer" containerID="c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.697244 4771 scope.go:117] "RemoveContainer" containerID="5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf" Jan 23 14:52:44 crc kubenswrapper[4771]: E0123 14:52:44.698214 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf\": container with ID starting with 5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf not found: ID does not exist" containerID="5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.698282 
4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf"} err="failed to get container status \"5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf\": rpc error: code = NotFound desc = could not find container \"5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf\": container with ID starting with 5dbda7370d1a1f880f10766c278daa0e235101d53fbba03471ce40e891563fdf not found: ID does not exist" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.698327 4771 scope.go:117] "RemoveContainer" containerID="fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac" Jan 23 14:52:44 crc kubenswrapper[4771]: E0123 14:52:44.698983 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac\": container with ID starting with fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac not found: ID does not exist" containerID="fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.699024 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac"} err="failed to get container status \"fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac\": rpc error: code = NotFound desc = could not find container \"fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac\": container with ID starting with fda3fedd8e8e981dee7b0839a383fb060349e1f5e95af6f5218563033bf0e8ac not found: ID does not exist" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.699039 4771 scope.go:117] "RemoveContainer" containerID="c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e" Jan 23 14:52:44 crc kubenswrapper[4771]: E0123 14:52:44.699371 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e\": container with ID starting with c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e not found: ID does not exist" containerID="c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e" Jan 23 14:52:44 crc kubenswrapper[4771]: I0123 14:52:44.699462 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e"} err="failed to get container status \"c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e\": rpc error: code = NotFound desc = could not find container \"c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e\": container with ID starting with c3206dbe8da57df4247fe87bf5e47af00cf2d3adf498ba84f96f946351dcdd4e not found: ID does not exist" Jan 23 14:52:45 crc kubenswrapper[4771]: I0123 14:52:45.241693 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" path="/var/lib/kubelet/pods/0216ffcb-2ded-4c24-a0ec-36611e90ae4a/volumes" Jan 23 14:53:00 crc kubenswrapper[4771]: I0123 14:53:00.312351 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
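[Editor's note] The paired "RemoveContainer" / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are a benign teardown race: the kubelet asks CRI-O to delete containers that are already gone, and the runtime answers with gRPC status NotFound. A minimal Go sketch of how such an error is classified (illustrative only; the kubelet's own handling lives in its CRI client):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isNotFound reports whether err carries gRPC status code NotFound,
// the condition behind the "could not find container" messages above.
func isNotFound(err error) bool {
	st, ok := status.FromError(err)
	return ok && st.Code() == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(isNotFound(err))                 // true
	fmt.Println(isNotFound(errors.New("other"))) // false
}
```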
Jan 23 14:53:00 crc kubenswrapper[4771]: I0123 14:53:00.312351 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:53:00 crc kubenswrapper[4771]: I0123 14:53:00.313090 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:53:21 crc kubenswrapper[4771]: E0123 14:53:21.576191 4771 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.243:57806->38.102.83.243:45109: read tcp 38.102.83.243:57806->38.102.83.243:45109: read: connection reset by peer
Jan 23 14:53:30 crc kubenswrapper[4771]: I0123 14:53:30.312219 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:53:30 crc kubenswrapper[4771]: I0123 14:53:30.313062 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:53:30 crc kubenswrapper[4771]: I0123 14:53:30.313116 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 14:53:30 crc kubenswrapper[4771]: I0123 14:53:30.314106 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"167af503c8d7547b9c66625aa6ba96b249098b60ee05bfc9535fb20332921a68"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:53:30 crc kubenswrapper[4771]: I0123 14:53:30.314181 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://167af503c8d7547b9c66625aa6ba96b249098b60ee05bfc9535fb20332921a68" gracePeriod=600
Jan 23 14:53:31 crc kubenswrapper[4771]: I0123 14:53:31.079953 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="167af503c8d7547b9c66625aa6ba96b249098b60ee05bfc9535fb20332921a68" exitCode=0
Jan 23 14:53:31 crc kubenswrapper[4771]: I0123 14:53:31.080033 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"167af503c8d7547b9c66625aa6ba96b249098b60ee05bfc9535fb20332921a68"}
Jan 23 14:53:31 crc kubenswrapper[4771]: I0123 14:53:31.080782 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"}
Jan 23 14:53:31 crc kubenswrapper[4771]: I0123 14:53:31.080810 4771 scope.go:117] "RemoveContainer" containerID="79d88c30f15eab022a9516ed0f1bbcb94d28a049015ad2b2da1cec0b8c4c3564"
Jan 23 14:55:30 crc kubenswrapper[4771]: I0123 14:55:30.312597 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:55:30 crc kubenswrapper[4771]: I0123 14:55:30.313223 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:56:00 crc kubenswrapper[4771]: I0123 14:56:00.312186 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:56:00 crc kubenswrapper[4771]: I0123 14:56:00.314062 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:56:30 crc kubenswrapper[4771]: I0123 14:56:30.312374 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:56:30 crc kubenswrapper[4771]: I0123 14:56:30.313095 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:56:30 crc kubenswrapper[4771]: I0123 14:56:30.313149 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d"
Jan 23 14:56:30 crc kubenswrapper[4771]: I0123 14:56:30.314152 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:56:30 crc kubenswrapper[4771]: I0123 14:56:30.314211 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" gracePeriod=600
Jan 23 14:56:30 crc kubenswrapper[4771]: E0123 14:56:30.447986 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:56:31 crc kubenswrapper[4771]: I0123 14:56:31.262754 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" exitCode=0
Jan 23 14:56:31 crc kubenswrapper[4771]: I0123 14:56:31.262825 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"}
Jan 23 14:56:31 crc kubenswrapper[4771]: I0123 14:56:31.262906 4771 scope.go:117] "RemoveContainer" containerID="167af503c8d7547b9c66625aa6ba96b249098b60ee05bfc9535fb20332921a68"
Jan 23 14:56:31 crc kubenswrapper[4771]: I0123 14:56:31.263845 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:56:31 crc kubenswrapper[4771]: E0123 14:56:31.264172 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:56:45 crc kubenswrapper[4771]: I0123 14:56:45.228828 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:56:45 crc kubenswrapper[4771]: E0123 14:56:45.229934 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:56:59 crc kubenswrapper[4771]: I0123 14:56:59.237218 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:56:59 crc kubenswrapper[4771]: E0123 14:56:59.238209 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:57:13 crc kubenswrapper[4771]: I0123 14:57:13.229874 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:57:13 crc kubenswrapper[4771]: E0123 14:57:13.231060 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:57:24 crc kubenswrapper[4771]: I0123 14:57:24.228822 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:57:24 crc kubenswrapper[4771]: E0123 14:57:24.230386 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:57:36 crc kubenswrapper[4771]: I0123 14:57:36.228938 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:57:36 crc kubenswrapper[4771]: E0123 14:57:36.229929 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:57:48 crc kubenswrapper[4771]: I0123 14:57:48.228983 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:57:48 crc kubenswrapper[4771]: E0123 14:57:48.230030 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:58:02 crc kubenswrapper[4771]: I0123 14:58:02.228100 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:58:02 crc kubenswrapper[4771]: E0123 14:58:02.229163 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:58:15 crc kubenswrapper[4771]: I0123 14:58:15.229844 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:58:15 crc kubenswrapper[4771]: E0123 14:58:15.232572 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
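[Editor's note] The rhythm of the "RemoveContainer" / "Error syncing pod" pairs above is the CrashLoopBackOff cycle: each sync attempt is refused while the restart back-off is in effect, and the back-off doubles up to the 5m cap quoted in the error ("back-off 5m0s"). A small sketch of that schedule (the 10s base and the doubling are kubelet defaults stated from general knowledge, not from this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute // the "back-off 5m0s" in the errors above
	delay := 10 * time.Second        // kubelet's default initial back-off
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart attempt %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// After roughly six restarts the delay pins at 5m0s, matching the log.
}
```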
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:58:26 crc kubenswrapper[4771]: I0123 14:58:26.229323 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 14:58:26 crc kubenswrapper[4771]: E0123 14:58:26.230436 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:58:41 crc kubenswrapper[4771]: I0123 14:58:41.229324 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 14:58:41 crc kubenswrapper[4771]: E0123 14:58:41.230531 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:58:53 crc kubenswrapper[4771]: I0123 14:58:53.227986 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 14:58:53 crc kubenswrapper[4771]: E0123 14:58:53.229095 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:59:04 crc kubenswrapper[4771]: I0123 14:59:04.229173 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 14:59:04 crc kubenswrapper[4771]: E0123 14:59:04.230291 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.860211 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-25bk9"] Jan 23 14:59:13 crc kubenswrapper[4771]: E0123 14:59:13.861658 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="extract-content" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.861681 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="extract-content" Jan 23 14:59:13 crc kubenswrapper[4771]: E0123 14:59:13.861704 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="registry-server" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 
14:59:13.861711 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="registry-server" Jan 23 14:59:13 crc kubenswrapper[4771]: E0123 14:59:13.861777 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="extract-utilities" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.861788 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="extract-utilities" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.862055 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="0216ffcb-2ded-4c24-a0ec-36611e90ae4a" containerName="registry-server" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.864084 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.874473 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-25bk9"] Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.905180 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-utilities\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.905438 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-catalog-content\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:13 crc kubenswrapper[4771]: I0123 14:59:13.905619 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lljr\" (UniqueName: \"kubernetes.io/projected/ca477697-1398-4a00-b93b-83d10aba2ba0-kube-api-access-6lljr\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.009169 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-utilities\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.009318 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-catalog-content\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.009427 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lljr\" (UniqueName: \"kubernetes.io/projected/ca477697-1398-4a00-b93b-83d10aba2ba0-kube-api-access-6lljr\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" 
Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.009844 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-utilities\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.010299 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-catalog-content\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.038223 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lljr\" (UniqueName: \"kubernetes.io/projected/ca477697-1398-4a00-b93b-83d10aba2ba0-kube-api-access-6lljr\") pod \"certified-operators-25bk9\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") " pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.200375 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:14 crc kubenswrapper[4771]: I0123 14:59:14.843171 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-25bk9"] Jan 23 14:59:15 crc kubenswrapper[4771]: I0123 14:59:15.114874 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerStarted","Data":"28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c"} Jan 23 14:59:15 crc kubenswrapper[4771]: I0123 14:59:15.115255 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerStarted","Data":"5e733bcbfd74db794ce793b6449a40666e4e09deffa543872f170f12e7ff929a"} Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.145765 4771 generic.go:334] "Generic (PLEG): container finished" podID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerID="28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c" exitCode=0 Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.145903 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerDied","Data":"28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c"} Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.153206 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.279936 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnrpv"] Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.285300 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.316604 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnrpv"] Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.405154 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2nm7\" (UniqueName: \"kubernetes.io/projected/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-kube-api-access-m2nm7\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.405556 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-utilities\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.405790 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-catalog-content\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.509092 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-utilities\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.509283 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-catalog-content\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.509460 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2nm7\" (UniqueName: \"kubernetes.io/projected/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-kube-api-access-m2nm7\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.509691 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-utilities\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.509843 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-catalog-content\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.540710 4771 operation_generator.go:637] "MountVolume.SetUp 
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.540710 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2nm7\" (UniqueName: \"kubernetes.io/projected/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-kube-api-access-m2nm7\") pod \"redhat-marketplace-cnrpv\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " pod="openshift-marketplace/redhat-marketplace-cnrpv"
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.623387 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnrpv"
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.860289 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2pt68"]
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.863561 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.878107 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pt68"]
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.919436 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-catalog-content\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.919823 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-utilities\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:16 crc kubenswrapper[4771]: I0123 14:59:16.919869 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5765r\" (UniqueName: \"kubernetes.io/projected/481083f7-eb53-4b67-9286-80fed5d98e9a-kube-api-access-5765r\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.022491 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-utilities\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.022954 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5765r\" (UniqueName: \"kubernetes.io/projected/481083f7-eb53-4b67-9286-80fed5d98e9a-kube-api-access-5765r\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.022988 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-catalog-content\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.023832 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-utilities\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.023849 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-catalog-content\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.050464 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5765r\" (UniqueName: \"kubernetes.io/projected/481083f7-eb53-4b67-9286-80fed5d98e9a-kube-api-access-5765r\") pod \"redhat-operators-2pt68\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.177921 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerStarted","Data":"f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62"}
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.219682 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.284068 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnrpv"]
Jan 23 14:59:17 crc kubenswrapper[4771]: I0123 14:59:17.825517 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2pt68"]
Jan 23 14:59:18 crc kubenswrapper[4771]: I0123 14:59:18.206205 4771 generic.go:334] "Generic (PLEG): container finished" podID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerID="f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99" exitCode=0
Jan 23 14:59:18 crc kubenswrapper[4771]: I0123 14:59:18.206306 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnrpv" event={"ID":"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b","Type":"ContainerDied","Data":"f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99"}
Jan 23 14:59:18 crc kubenswrapper[4771]: I0123 14:59:18.206642 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnrpv" event={"ID":"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b","Type":"ContainerStarted","Data":"b6c264502decac114be9540f4ea306f79a0be03aae77defefdb5680e5cc1337e"}
Jan 23 14:59:18 crc kubenswrapper[4771]: I0123 14:59:18.211335 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pt68" event={"ID":"481083f7-eb53-4b67-9286-80fed5d98e9a","Type":"ContainerStarted","Data":"4d05dec440b7d59f10ade74acf12bdca2846e4b1b141f9aebb418a64116bf4e8"}
Jan 23 14:59:19 crc kubenswrapper[4771]: I0123 14:59:19.226385 4771 generic.go:334] "Generic (PLEG): container finished" podID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerID="f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62" exitCode=0
Jan 23 14:59:19 crc kubenswrapper[4771]: I0123 14:59:19.226441 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerDied","Data":"f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62"}
Jan 23 14:59:19 crc kubenswrapper[4771]: I0123 14:59:19.229445 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:59:19 crc kubenswrapper[4771]: E0123 14:59:19.230909 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:59:19 crc kubenswrapper[4771]: I0123 14:59:19.230995 4771 generic.go:334] "Generic (PLEG): container finished" podID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerID="faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9" exitCode=0
Jan 23 14:59:19 crc kubenswrapper[4771]: I0123 14:59:19.253738 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pt68" event={"ID":"481083f7-eb53-4b67-9286-80fed5d98e9a","Type":"ContainerDied","Data":"faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9"}
Jan 23 14:59:20 crc kubenswrapper[4771]: I0123 14:59:20.251436 4771 generic.go:334] "Generic (PLEG): container finished" podID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerID="8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25" exitCode=0
Jan 23 14:59:20 crc kubenswrapper[4771]: I0123 14:59:20.251555 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnrpv" event={"ID":"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b","Type":"ContainerDied","Data":"8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25"}
Jan 23 14:59:20 crc kubenswrapper[4771]: I0123 14:59:20.258645 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerStarted","Data":"150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e"}
Jan 23 14:59:20 crc kubenswrapper[4771]: I0123 14:59:20.337430 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-25bk9" podStartSLOduration=3.872638501 podStartE2EDuration="7.337376595s" podCreationTimestamp="2026-01-23 14:59:13 +0000 UTC" firstStartedPulling="2026-01-23 14:59:16.152943232 +0000 UTC m=+5197.175480857" lastFinishedPulling="2026-01-23 14:59:19.617681326 +0000 UTC m=+5200.640218951" observedRunningTime="2026-01-23 14:59:20.324019274 +0000 UTC m=+5201.346556909" watchObservedRunningTime="2026-01-23 14:59:20.337376595 +0000 UTC m=+5201.359914240"
Jan 23 14:59:21 crc kubenswrapper[4771]: I0123 14:59:21.273608 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pt68" event={"ID":"481083f7-eb53-4b67-9286-80fed5d98e9a","Type":"ContainerStarted","Data":"492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe"}
Jan 23 14:59:22 crc kubenswrapper[4771]: I0123 14:59:22.288688 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnrpv" event={"ID":"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b","Type":"ContainerStarted","Data":"aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a"}
Jan 23 14:59:22 crc kubenswrapper[4771]: I0123 14:59:22.324262 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnrpv" podStartSLOduration=3.440801018 podStartE2EDuration="6.324236039s" podCreationTimestamp="2026-01-23 14:59:16 +0000 UTC" firstStartedPulling="2026-01-23 14:59:18.209147203 +0000 UTC m=+5199.231684828" lastFinishedPulling="2026-01-23 14:59:21.092582184 +0000 UTC m=+5202.115119849" observedRunningTime="2026-01-23 14:59:22.314795311 +0000 UTC m=+5203.337332946" watchObservedRunningTime="2026-01-23 14:59:22.324236039 +0000 UTC m=+5203.346773674"
Jan 23 14:59:24 crc kubenswrapper[4771]: I0123 14:59:24.200918 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-25bk9"
Jan 23 14:59:24 crc kubenswrapper[4771]: I0123 14:59:24.201389 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-25bk9"
Jan 23 14:59:25 crc kubenswrapper[4771]: I0123 14:59:25.257440 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-25bk9" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:59:25 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:59:25 crc kubenswrapper[4771]: >
Jan 23 14:59:25 crc kubenswrapper[4771]: I0123 14:59:25.328156 4771 generic.go:334] "Generic (PLEG): container finished" podID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerID="492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe" exitCode=0
Jan 23 14:59:25 crc kubenswrapper[4771]: I0123 14:59:25.328228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pt68" event={"ID":"481083f7-eb53-4b67-9286-80fed5d98e9a","Type":"ContainerDied","Data":"492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe"}
Jan 23 14:59:26 crc kubenswrapper[4771]: I0123 14:59:26.343747 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pt68" event={"ID":"481083f7-eb53-4b67-9286-80fed5d98e9a","Type":"ContainerStarted","Data":"6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328"}
Jan 23 14:59:26 crc kubenswrapper[4771]: I0123 14:59:26.372052 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2pt68" podStartSLOduration=3.818726948 podStartE2EDuration="10.372027613s" podCreationTimestamp="2026-01-23 14:59:16 +0000 UTC" firstStartedPulling="2026-01-23 14:59:19.23286656 +0000 UTC m=+5200.255404185" lastFinishedPulling="2026-01-23 14:59:25.786167225 +0000 UTC m=+5206.808704850" observedRunningTime="2026-01-23 14:59:26.365021291 +0000 UTC m=+5207.387558926" watchObservedRunningTime="2026-01-23 14:59:26.372027613 +0000 UTC m=+5207.394565238"
Jan 23 14:59:26 crc kubenswrapper[4771]: I0123 14:59:26.624372 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnrpv"
Jan 23 14:59:26 crc kubenswrapper[4771]: I0123 14:59:26.626682 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cnrpv"
Jan 23 14:59:27 crc kubenswrapper[4771]: I0123 14:59:27.220700 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:27 crc kubenswrapper[4771]: I0123 14:59:27.221243 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2pt68"
Jan 23 14:59:27 crc kubenswrapper[4771]: I0123 14:59:27.705432 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cnrpv" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:59:27 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:59:27 crc kubenswrapper[4771]: >
Jan 23 14:59:28 crc kubenswrapper[4771]: I0123 14:59:28.295570 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pt68" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:59:28 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:59:28 crc kubenswrapper[4771]: >
Jan 23 14:59:30 crc kubenswrapper[4771]: I0123 14:59:30.229461 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:59:30 crc kubenswrapper[4771]: E0123 14:59:30.229823 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:59:34 crc kubenswrapper[4771]: I0123 14:59:34.255628 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-25bk9"
Jan 23 14:59:34 crc kubenswrapper[4771]: I0123 14:59:34.307570 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-25bk9"
Jan 23 14:59:36 crc kubenswrapper[4771]: I0123 14:59:36.684642 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnrpv"
Jan 23 14:59:36 crc kubenswrapper[4771]: I0123 14:59:36.739261 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnrpv"
Jan 23 14:59:38 crc kubenswrapper[4771]: I0123 14:59:38.283339 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2pt68" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:59:38 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:59:38 crc kubenswrapper[4771]: >
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.439535 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-25bk9"]
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.440289 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-25bk9" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="registry-server" containerID="cri-o://150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e" gracePeriod=2
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.934049 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-25bk9"
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.987332 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-catalog-content\") pod \"ca477697-1398-4a00-b93b-83d10aba2ba0\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") "
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.987462 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lljr\" (UniqueName: \"kubernetes.io/projected/ca477697-1398-4a00-b93b-83d10aba2ba0-kube-api-access-6lljr\") pod \"ca477697-1398-4a00-b93b-83d10aba2ba0\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") "
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.987641 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-utilities\") pod \"ca477697-1398-4a00-b93b-83d10aba2ba0\" (UID: \"ca477697-1398-4a00-b93b-83d10aba2ba0\") "
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.988234 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-utilities" (OuterVolumeSpecName: "utilities") pod "ca477697-1398-4a00-b93b-83d10aba2ba0" (UID: "ca477697-1398-4a00-b93b-83d10aba2ba0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.989076 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:59:40 crc kubenswrapper[4771]: I0123 14:59:40.998894 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca477697-1398-4a00-b93b-83d10aba2ba0-kube-api-access-6lljr" (OuterVolumeSpecName: "kube-api-access-6lljr") pod "ca477697-1398-4a00-b93b-83d10aba2ba0" (UID: "ca477697-1398-4a00-b93b-83d10aba2ba0"). InnerVolumeSpecName "kube-api-access-6lljr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.042536 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca477697-1398-4a00-b93b-83d10aba2ba0" (UID: "ca477697-1398-4a00-b93b-83d10aba2ba0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.092316 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca477697-1398-4a00-b93b-83d10aba2ba0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.092362 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lljr\" (UniqueName: \"kubernetes.io/projected/ca477697-1398-4a00-b93b-83d10aba2ba0-kube-api-access-6lljr\") on node \"crc\" DevicePath \"\""
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.230013 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb"
Jan 23 14:59:41 crc kubenswrapper[4771]: E0123 14:59:41.230459 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6"
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.437373 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnrpv"]
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.437707 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cnrpv" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="registry-server" containerID="cri-o://aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a" gracePeriod=2
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.512286 4771 generic.go:334] "Generic (PLEG): container finished" podID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerID="150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e" exitCode=0
Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.512341 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerDied","Data":"150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e"}
Need to start a new one" pod="openshift-marketplace/certified-operators-25bk9" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.512375 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-25bk9" event={"ID":"ca477697-1398-4a00-b93b-83d10aba2ba0","Type":"ContainerDied","Data":"5e733bcbfd74db794ce793b6449a40666e4e09deffa543872f170f12e7ff929a"} Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.512399 4771 scope.go:117] "RemoveContainer" containerID="150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.580097 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-25bk9"] Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.602233 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-25bk9"] Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.615015 4771 scope.go:117] "RemoveContainer" containerID="f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.647762 4771 scope.go:117] "RemoveContainer" containerID="28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.709452 4771 scope.go:117] "RemoveContainer" containerID="150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e" Jan 23 14:59:41 crc kubenswrapper[4771]: E0123 14:59:41.717048 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e\": container with ID starting with 150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e not found: ID does not exist" containerID="150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.717119 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e"} err="failed to get container status \"150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e\": rpc error: code = NotFound desc = could not find container \"150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e\": container with ID starting with 150020dd8bea9f6ffd492e27659306f98736980c77a789b010fa9a148bf7540e not found: ID does not exist" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.717154 4771 scope.go:117] "RemoveContainer" containerID="f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62" Jan 23 14:59:41 crc kubenswrapper[4771]: E0123 14:59:41.717817 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62\": container with ID starting with f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62 not found: ID does not exist" containerID="f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.717928 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62"} err="failed to get container status \"f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62\": rpc error: code = NotFound desc = could not find 
container \"f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62\": container with ID starting with f9768dcf438b05ac80c44ca40ffb6765a2f76b9cdaf4dd21a5c2b6a1fd48aa62 not found: ID does not exist" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.717950 4771 scope.go:117] "RemoveContainer" containerID="28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c" Jan 23 14:59:41 crc kubenswrapper[4771]: E0123 14:59:41.718380 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c\": container with ID starting with 28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c not found: ID does not exist" containerID="28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c" Jan 23 14:59:41 crc kubenswrapper[4771]: I0123 14:59:41.718399 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c"} err="failed to get container status \"28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c\": rpc error: code = NotFound desc = could not find container \"28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c\": container with ID starting with 28fce07ef0ff17a7106df7273c30133ac24c910aa3131827fced7e675745569c not found: ID does not exist" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.006664 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.138900 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-catalog-content\") pod \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.139815 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2nm7\" (UniqueName: \"kubernetes.io/projected/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-kube-api-access-m2nm7\") pod \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.140234 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-utilities\") pod \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\" (UID: \"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b\") " Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.142175 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-utilities" (OuterVolumeSpecName: "utilities") pod "e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" (UID: "e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.142778 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.147118 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-kube-api-access-m2nm7" (OuterVolumeSpecName: "kube-api-access-m2nm7") pod "e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" (UID: "e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b"). InnerVolumeSpecName "kube-api-access-m2nm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.167777 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" (UID: "e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.245360 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.245426 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2nm7\" (UniqueName: \"kubernetes.io/projected/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b-kube-api-access-m2nm7\") on node \"crc\" DevicePath \"\"" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.533972 4771 generic.go:334] "Generic (PLEG): container finished" podID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerID="aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a" exitCode=0 Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.534042 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnrpv" event={"ID":"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b","Type":"ContainerDied","Data":"aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a"} Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.534085 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnrpv" event={"ID":"e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b","Type":"ContainerDied","Data":"b6c264502decac114be9540f4ea306f79a0be03aae77defefdb5680e5cc1337e"} Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.534119 4771 scope.go:117] "RemoveContainer" containerID="aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.535281 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnrpv" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.581760 4771 scope.go:117] "RemoveContainer" containerID="8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.594332 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnrpv"] Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.608896 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnrpv"] Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.613777 4771 scope.go:117] "RemoveContainer" containerID="f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.641979 4771 scope.go:117] "RemoveContainer" containerID="aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a" Jan 23 14:59:42 crc kubenswrapper[4771]: E0123 14:59:42.642680 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a\": container with ID starting with aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a not found: ID does not exist" containerID="aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.642758 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a"} err="failed to get container status \"aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a\": rpc error: code = NotFound desc = could not find container \"aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a\": container with ID starting with aeffee51da524ca2914cca9ca3c6bfa9eb94b42f66c03aeb21d51591f479121a not found: ID does not exist" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.642815 4771 scope.go:117] "RemoveContainer" containerID="8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25" Jan 23 14:59:42 crc kubenswrapper[4771]: E0123 14:59:42.643647 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25\": container with ID starting with 8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25 not found: ID does not exist" containerID="8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.643758 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25"} err="failed to get container status \"8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25\": rpc error: code = NotFound desc = could not find container \"8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25\": container with ID starting with 8820bfacf1b05f64f8f2e04e3bcca815ef153aea8a691afc03c30f54ef9a8d25 not found: ID does not exist" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.643817 4771 scope.go:117] "RemoveContainer" containerID="f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99" Jan 23 14:59:42 crc kubenswrapper[4771]: E0123 14:59:42.644688 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99\": container with ID starting with f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99 not found: ID does not exist" containerID="f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99" Jan 23 14:59:42 crc kubenswrapper[4771]: I0123 14:59:42.644730 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99"} err="failed to get container status \"f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99\": rpc error: code = NotFound desc = could not find container \"f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99\": container with ID starting with f2f981dda262f63399a9e260f5c422fd6d07953de59c94292a77b1cc6f811d99 not found: ID does not exist" Jan 23 14:59:43 crc kubenswrapper[4771]: I0123 14:59:43.242772 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" path="/var/lib/kubelet/pods/ca477697-1398-4a00-b93b-83d10aba2ba0/volumes" Jan 23 14:59:43 crc kubenswrapper[4771]: I0123 14:59:43.243711 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" path="/var/lib/kubelet/pods/e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b/volumes" Jan 23 14:59:47 crc kubenswrapper[4771]: I0123 14:59:47.273994 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2pt68" Jan 23 14:59:47 crc kubenswrapper[4771]: I0123 14:59:47.333587 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2pt68" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.043902 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pt68"] Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.044936 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2pt68" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="registry-server" containerID="cri-o://6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328" gracePeriod=2 Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.575090 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2pt68" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.628020 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-catalog-content\") pod \"481083f7-eb53-4b67-9286-80fed5d98e9a\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.628065 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-utilities\") pod \"481083f7-eb53-4b67-9286-80fed5d98e9a\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.628239 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5765r\" (UniqueName: \"kubernetes.io/projected/481083f7-eb53-4b67-9286-80fed5d98e9a-kube-api-access-5765r\") pod \"481083f7-eb53-4b67-9286-80fed5d98e9a\" (UID: \"481083f7-eb53-4b67-9286-80fed5d98e9a\") " Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.630450 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-utilities" (OuterVolumeSpecName: "utilities") pod "481083f7-eb53-4b67-9286-80fed5d98e9a" (UID: "481083f7-eb53-4b67-9286-80fed5d98e9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.636672 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481083f7-eb53-4b67-9286-80fed5d98e9a-kube-api-access-5765r" (OuterVolumeSpecName: "kube-api-access-5765r") pod "481083f7-eb53-4b67-9286-80fed5d98e9a" (UID: "481083f7-eb53-4b67-9286-80fed5d98e9a"). InnerVolumeSpecName "kube-api-access-5765r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.654287 4771 generic.go:334] "Generic (PLEG): container finished" podID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerID="6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328" exitCode=0 Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.654342 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pt68" event={"ID":"481083f7-eb53-4b67-9286-80fed5d98e9a","Type":"ContainerDied","Data":"6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328"} Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.654390 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2pt68" event={"ID":"481083f7-eb53-4b67-9286-80fed5d98e9a","Type":"ContainerDied","Data":"4d05dec440b7d59f10ade74acf12bdca2846e4b1b141f9aebb418a64116bf4e8"} Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.654422 4771 scope.go:117] "RemoveContainer" containerID="6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.654586 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2pt68" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.719108 4771 scope.go:117] "RemoveContainer" containerID="492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.731161 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.731208 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5765r\" (UniqueName: \"kubernetes.io/projected/481083f7-eb53-4b67-9286-80fed5d98e9a-kube-api-access-5765r\") on node \"crc\" DevicePath \"\"" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.746383 4771 scope.go:117] "RemoveContainer" containerID="faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.761587 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481083f7-eb53-4b67-9286-80fed5d98e9a" (UID: "481083f7-eb53-4b67-9286-80fed5d98e9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.794706 4771 scope.go:117] "RemoveContainer" containerID="6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328" Jan 23 14:59:52 crc kubenswrapper[4771]: E0123 14:59:52.795454 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328\": container with ID starting with 6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328 not found: ID does not exist" containerID="6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.795487 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328"} err="failed to get container status \"6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328\": rpc error: code = NotFound desc = could not find container \"6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328\": container with ID starting with 6c2801ee82789af0d308b4b88c5896cb8d3c229814c40d0dab04a319eb620328 not found: ID does not exist" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.795513 4771 scope.go:117] "RemoveContainer" containerID="492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe" Jan 23 14:59:52 crc kubenswrapper[4771]: E0123 14:59:52.795967 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe\": container with ID starting with 492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe not found: ID does not exist" containerID="492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.795993 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe"} err="failed to get 
container status \"492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe\": rpc error: code = NotFound desc = could not find container \"492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe\": container with ID starting with 492356bb525af619b38499c0d543b4ce6dd9dadba4cd1f05b964e85350ae2bfe not found: ID does not exist" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.796008 4771 scope.go:117] "RemoveContainer" containerID="faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9" Jan 23 14:59:52 crc kubenswrapper[4771]: E0123 14:59:52.796423 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9\": container with ID starting with faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9 not found: ID does not exist" containerID="faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.796457 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9"} err="failed to get container status \"faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9\": rpc error: code = NotFound desc = could not find container \"faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9\": container with ID starting with faa302a45b17ceb10cdafaf61ede8de6a478f9324f8341c72b9997c254c8efa9 not found: ID does not exist" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.833512 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481083f7-eb53-4b67-9286-80fed5d98e9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:59:52 crc kubenswrapper[4771]: I0123 14:59:52.991812 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2pt68"] Jan 23 14:59:53 crc kubenswrapper[4771]: I0123 14:59:53.003068 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2pt68"] Jan 23 14:59:53 crc kubenswrapper[4771]: I0123 14:59:53.241564 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" path="/var/lib/kubelet/pods/481083f7-eb53-4b67-9286-80fed5d98e9a/volumes" Jan 23 14:59:55 crc kubenswrapper[4771]: I0123 14:59:55.229274 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 14:59:55 crc kubenswrapper[4771]: E0123 14:59:55.231819 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.164887 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt"] Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166388 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 
15:00:00.166431 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166463 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="extract-content" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166472 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="extract-content" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166489 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="extract-utilities" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166498 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="extract-utilities" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166545 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="extract-utilities" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166556 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="extract-utilities" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166569 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="extract-content" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166576 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="extract-content" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166585 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="extract-utilities" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166592 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="extract-utilities" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166606 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="extract-content" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166613 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="extract-content" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166625 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166631 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: E0123 15:00:00.166644 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166651 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166920 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="481083f7-eb53-4b67-9286-80fed5d98e9a" containerName="registry-server" Jan 23 15:00:00 
crc kubenswrapper[4771]: I0123 15:00:00.166938 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a59d6b-0a64-48cc-b4ce-ea8d39d11f0b" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.166967 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca477697-1398-4a00-b93b-83d10aba2ba0" containerName="registry-server" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.168063 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.170615 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.171233 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.179859 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt"] Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.223978 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7294da89-427b-49b3-b826-5de0def171f3-secret-volume\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.224660 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7294da89-427b-49b3-b826-5de0def171f3-config-volume\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.224811 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf85q\" (UniqueName: \"kubernetes.io/projected/7294da89-427b-49b3-b826-5de0def171f3-kube-api-access-hf85q\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.327978 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7294da89-427b-49b3-b826-5de0def171f3-config-volume\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.328047 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf85q\" (UniqueName: \"kubernetes.io/projected/7294da89-427b-49b3-b826-5de0def171f3-kube-api-access-hf85q\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.328144 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/7294da89-427b-49b3-b826-5de0def171f3-secret-volume\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.329119 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7294da89-427b-49b3-b826-5de0def171f3-config-volume\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.336145 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7294da89-427b-49b3-b826-5de0def171f3-secret-volume\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.349286 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf85q\" (UniqueName: \"kubernetes.io/projected/7294da89-427b-49b3-b826-5de0def171f3-kube-api-access-hf85q\") pod \"collect-profiles-29486340-z8sbt\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.502066 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:00 crc kubenswrapper[4771]: I0123 15:00:00.990227 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt"] Jan 23 15:00:01 crc kubenswrapper[4771]: I0123 15:00:01.752228 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" event={"ID":"7294da89-427b-49b3-b826-5de0def171f3","Type":"ContainerStarted","Data":"0cf4485187ebab45ae45df4df9b4b7ae64ab0d5fe52889e12dbea77c638ac688"} Jan 23 15:00:01 crc kubenswrapper[4771]: I0123 15:00:01.752594 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" event={"ID":"7294da89-427b-49b3-b826-5de0def171f3","Type":"ContainerStarted","Data":"6fa77a15000a9fe08165fd359dddf1235af9fcf69f0580006fbcd813e337f687"} Jan 23 15:00:01 crc kubenswrapper[4771]: I0123 15:00:01.772159 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" podStartSLOduration=1.772133788 podStartE2EDuration="1.772133788s" podCreationTimestamp="2026-01-23 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 15:00:01.76679746 +0000 UTC m=+5242.789335105" watchObservedRunningTime="2026-01-23 15:00:01.772133788 +0000 UTC m=+5242.794671413" Jan 23 15:00:02 crc kubenswrapper[4771]: I0123 15:00:02.763759 4771 generic.go:334] "Generic (PLEG): container finished" podID="7294da89-427b-49b3-b826-5de0def171f3" containerID="0cf4485187ebab45ae45df4df9b4b7ae64ab0d5fe52889e12dbea77c638ac688" exitCode=0 Jan 23 15:00:02 crc kubenswrapper[4771]: I0123 15:00:02.763822 4771 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" event={"ID":"7294da89-427b-49b3-b826-5de0def171f3","Type":"ContainerDied","Data":"0cf4485187ebab45ae45df4df9b4b7ae64ab0d5fe52889e12dbea77c638ac688"} Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.176816 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.240082 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7294da89-427b-49b3-b826-5de0def171f3-config-volume\") pod \"7294da89-427b-49b3-b826-5de0def171f3\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.240287 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf85q\" (UniqueName: \"kubernetes.io/projected/7294da89-427b-49b3-b826-5de0def171f3-kube-api-access-hf85q\") pod \"7294da89-427b-49b3-b826-5de0def171f3\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.240344 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7294da89-427b-49b3-b826-5de0def171f3-secret-volume\") pod \"7294da89-427b-49b3-b826-5de0def171f3\" (UID: \"7294da89-427b-49b3-b826-5de0def171f3\") " Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.241011 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7294da89-427b-49b3-b826-5de0def171f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "7294da89-427b-49b3-b826-5de0def171f3" (UID: "7294da89-427b-49b3-b826-5de0def171f3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.241731 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7294da89-427b-49b3-b826-5de0def171f3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.248734 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7294da89-427b-49b3-b826-5de0def171f3-kube-api-access-hf85q" (OuterVolumeSpecName: "kube-api-access-hf85q") pod "7294da89-427b-49b3-b826-5de0def171f3" (UID: "7294da89-427b-49b3-b826-5de0def171f3"). InnerVolumeSpecName "kube-api-access-hf85q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.249017 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7294da89-427b-49b3-b826-5de0def171f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7294da89-427b-49b3-b826-5de0def171f3" (UID: "7294da89-427b-49b3-b826-5de0def171f3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.344077 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf85q\" (UniqueName: \"kubernetes.io/projected/7294da89-427b-49b3-b826-5de0def171f3-kube-api-access-hf85q\") on node \"crc\" DevicePath \"\"" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.344122 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7294da89-427b-49b3-b826-5de0def171f3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.783962 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" event={"ID":"7294da89-427b-49b3-b826-5de0def171f3","Type":"ContainerDied","Data":"6fa77a15000a9fe08165fd359dddf1235af9fcf69f0580006fbcd813e337f687"} Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.784006 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fa77a15000a9fe08165fd359dddf1235af9fcf69f0580006fbcd813e337f687" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.784389 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486340-z8sbt" Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.855240 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x"] Jan 23 15:00:04 crc kubenswrapper[4771]: I0123 15:00:04.867606 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-k696x"] Jan 23 15:00:05 crc kubenswrapper[4771]: I0123 15:00:05.241079 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb9bebd-ad35-4557-a357-ce8f389131fc" path="/var/lib/kubelet/pods/5fb9bebd-ad35-4557-a357-ce8f389131fc/volumes" Jan 23 15:00:10 crc kubenswrapper[4771]: I0123 15:00:10.228296 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:00:10 crc kubenswrapper[4771]: E0123 15:00:10.229174 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:00:23 crc kubenswrapper[4771]: I0123 15:00:23.228918 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:00:23 crc kubenswrapper[4771]: E0123 15:00:23.229857 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:00:35 crc kubenswrapper[4771]: I0123 15:00:35.229454 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:00:35 
crc kubenswrapper[4771]: E0123 15:00:35.230502 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:00:47 crc kubenswrapper[4771]: I0123 15:00:47.228230 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:00:47 crc kubenswrapper[4771]: E0123 15:00:47.229376 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:00:59 crc kubenswrapper[4771]: I0123 15:00:59.236380 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:00:59 crc kubenswrapper[4771]: E0123 15:00:59.238546 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.163055 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486341-k2bv2"] Jan 23 15:01:00 crc kubenswrapper[4771]: E0123 15:01:00.163679 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7294da89-427b-49b3-b826-5de0def171f3" containerName="collect-profiles" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.163700 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="7294da89-427b-49b3-b826-5de0def171f3" containerName="collect-profiles" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.163993 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="7294da89-427b-49b3-b826-5de0def171f3" containerName="collect-profiles" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.165018 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.174787 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486341-k2bv2"] Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.223327 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwx49\" (UniqueName: \"kubernetes.io/projected/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-kube-api-access-gwx49\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.223714 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-config-data\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.223846 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-combined-ca-bundle\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.224027 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-fernet-keys\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.326254 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-combined-ca-bundle\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.326435 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-fernet-keys\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.326617 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwx49\" (UniqueName: \"kubernetes.io/projected/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-kube-api-access-gwx49\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.326792 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-config-data\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.338479 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-fernet-keys\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.338688 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-combined-ca-bundle\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.338804 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-config-data\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.347124 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwx49\" (UniqueName: \"kubernetes.io/projected/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-kube-api-access-gwx49\") pod \"keystone-cron-29486341-k2bv2\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.496014 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:00 crc kubenswrapper[4771]: I0123 15:01:00.894079 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486341-k2bv2"] Jan 23 15:01:01 crc kubenswrapper[4771]: I0123 15:01:01.387863 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486341-k2bv2" event={"ID":"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013","Type":"ContainerStarted","Data":"a07b98f4a8288dd259274d5a1707822aa4c07b349b8a208a6527f968d0014cb0"} Jan 23 15:01:01 crc kubenswrapper[4771]: I0123 15:01:01.388133 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486341-k2bv2" event={"ID":"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013","Type":"ContainerStarted","Data":"d08ce13d35fbf1097aaed5b4c34c9125dffba171646a575ea9cb65b09cd0e010"} Jan 23 15:01:01 crc kubenswrapper[4771]: I0123 15:01:01.407519 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486341-k2bv2" podStartSLOduration=1.40749396 podStartE2EDuration="1.40749396s" podCreationTimestamp="2026-01-23 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 15:01:01.403731632 +0000 UTC m=+5302.426269257" watchObservedRunningTime="2026-01-23 15:01:01.40749396 +0000 UTC m=+5302.430031585" Jan 23 15:01:01 crc kubenswrapper[4771]: I0123 15:01:01.453898 4771 scope.go:117] "RemoveContainer" containerID="d22af3278c3b1636c8ff7bd43798a40c3bfbfd981066ff611bd19afe1093f1b6" Jan 23 15:01:05 crc kubenswrapper[4771]: I0123 15:01:05.427016 4771 generic.go:334] "Generic (PLEG): container finished" podID="db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" containerID="a07b98f4a8288dd259274d5a1707822aa4c07b349b8a208a6527f968d0014cb0" exitCode=0 Jan 23 15:01:05 crc kubenswrapper[4771]: I0123 15:01:05.427103 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486341-k2bv2" 
event={"ID":"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013","Type":"ContainerDied","Data":"a07b98f4a8288dd259274d5a1707822aa4c07b349b8a208a6527f968d0014cb0"} Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.846235 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.915586 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-combined-ca-bundle\") pod \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.916114 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-fernet-keys\") pod \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.916243 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwx49\" (UniqueName: \"kubernetes.io/projected/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-kube-api-access-gwx49\") pod \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.916590 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-config-data\") pod \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\" (UID: \"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013\") " Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.925639 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-kube-api-access-gwx49" (OuterVolumeSpecName: "kube-api-access-gwx49") pod "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" (UID: "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013"). InnerVolumeSpecName "kube-api-access-gwx49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.927649 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" (UID: "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.960138 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" (UID: "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 15:01:06 crc kubenswrapper[4771]: I0123 15:01:06.997003 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-config-data" (OuterVolumeSpecName: "config-data") pod "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" (UID: "db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 15:01:07 crc kubenswrapper[4771]: I0123 15:01:07.020018 4771 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 15:01:07 crc kubenswrapper[4771]: I0123 15:01:07.020329 4771 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 15:01:07 crc kubenswrapper[4771]: I0123 15:01:07.020476 4771 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 15:01:07 crc kubenswrapper[4771]: I0123 15:01:07.020572 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwx49\" (UniqueName: \"kubernetes.io/projected/db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013-kube-api-access-gwx49\") on node \"crc\" DevicePath \"\"" Jan 23 15:01:07 crc kubenswrapper[4771]: I0123 15:01:07.452010 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486341-k2bv2" event={"ID":"db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013","Type":"ContainerDied","Data":"d08ce13d35fbf1097aaed5b4c34c9125dffba171646a575ea9cb65b09cd0e010"} Jan 23 15:01:07 crc kubenswrapper[4771]: I0123 15:01:07.452091 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d08ce13d35fbf1097aaed5b4c34c9125dffba171646a575ea9cb65b09cd0e010" Jan 23 15:01:07 crc kubenswrapper[4771]: I0123 15:01:07.452048 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486341-k2bv2" Jan 23 15:01:12 crc kubenswrapper[4771]: I0123 15:01:12.229150 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:01:12 crc kubenswrapper[4771]: E0123 15:01:12.230286 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:01:24 crc kubenswrapper[4771]: I0123 15:01:24.229168 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:01:24 crc kubenswrapper[4771]: E0123 15:01:24.230164 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:01:38 crc kubenswrapper[4771]: I0123 15:01:38.229458 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:01:38 crc kubenswrapper[4771]: I0123 15:01:38.826748 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"24eff8549632849720fabda7c84bc3f93022730cbf5277518f1ab0179ebc126d"} Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.161841 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f9t57"] Jan 23 15:03:10 crc kubenswrapper[4771]: E0123 15:03:10.164265 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" containerName="keystone-cron" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.164354 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" containerName="keystone-cron" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.164664 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013" containerName="keystone-cron" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.166392 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.179693 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9t57"] Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.330007 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jp9z\" (UniqueName: \"kubernetes.io/projected/68cb5e7a-de96-4970-bfc8-a2948d224e8f-kube-api-access-2jp9z\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.330103 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-utilities\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.330134 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-catalog-content\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.432281 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-catalog-content\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.432518 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jp9z\" (UniqueName: \"kubernetes.io/projected/68cb5e7a-de96-4970-bfc8-a2948d224e8f-kube-api-access-2jp9z\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.432588 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-utilities\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.433041 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-utilities\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.433372 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-catalog-content\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.464452 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jp9z\" (UniqueName: \"kubernetes.io/projected/68cb5e7a-de96-4970-bfc8-a2948d224e8f-kube-api-access-2jp9z\") pod \"community-operators-f9t57\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:10 crc kubenswrapper[4771]: I0123 15:03:10.494088 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:11 crc kubenswrapper[4771]: I0123 15:03:11.062266 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f9t57"] Jan 23 15:03:11 crc kubenswrapper[4771]: I0123 15:03:11.800319 4771 generic.go:334] "Generic (PLEG): container finished" podID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerID="3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604" exitCode=0 Jan 23 15:03:11 crc kubenswrapper[4771]: I0123 15:03:11.800396 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9t57" event={"ID":"68cb5e7a-de96-4970-bfc8-a2948d224e8f","Type":"ContainerDied","Data":"3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604"} Jan 23 15:03:11 crc kubenswrapper[4771]: I0123 15:03:11.802152 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9t57" event={"ID":"68cb5e7a-de96-4970-bfc8-a2948d224e8f","Type":"ContainerStarted","Data":"80559135a195291f8ecccf7a9fe9752d7b2216cc9ce43ba95741d24e98a1cc37"} Jan 23 15:03:12 crc kubenswrapper[4771]: I0123 15:03:12.813204 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9t57" event={"ID":"68cb5e7a-de96-4970-bfc8-a2948d224e8f","Type":"ContainerStarted","Data":"c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427"} Jan 23 15:03:13 crc kubenswrapper[4771]: I0123 15:03:13.825994 4771 generic.go:334] "Generic (PLEG): container finished" podID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerID="c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427" exitCode=0 Jan 23 15:03:13 crc kubenswrapper[4771]: I0123 15:03:13.826045 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9t57" event={"ID":"68cb5e7a-de96-4970-bfc8-a2948d224e8f","Type":"ContainerDied","Data":"c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427"} Jan 
23 15:03:14 crc kubenswrapper[4771]: I0123 15:03:14.840526 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9t57" event={"ID":"68cb5e7a-de96-4970-bfc8-a2948d224e8f","Type":"ContainerStarted","Data":"8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195"} Jan 23 15:03:14 crc kubenswrapper[4771]: I0123 15:03:14.862890 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f9t57" podStartSLOduration=2.36699592 podStartE2EDuration="4.862866919s" podCreationTimestamp="2026-01-23 15:03:10 +0000 UTC" firstStartedPulling="2026-01-23 15:03:11.802857232 +0000 UTC m=+5432.825394857" lastFinishedPulling="2026-01-23 15:03:14.298728221 +0000 UTC m=+5435.321265856" observedRunningTime="2026-01-23 15:03:14.861581339 +0000 UTC m=+5435.884118974" watchObservedRunningTime="2026-01-23 15:03:14.862866919 +0000 UTC m=+5435.885404544" Jan 23 15:03:20 crc kubenswrapper[4771]: I0123 15:03:20.494580 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:20 crc kubenswrapper[4771]: I0123 15:03:20.495151 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:20 crc kubenswrapper[4771]: I0123 15:03:20.562207 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:20 crc kubenswrapper[4771]: I0123 15:03:20.992617 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:21 crc kubenswrapper[4771]: I0123 15:03:21.052633 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9t57"] Jan 23 15:03:22 crc kubenswrapper[4771]: I0123 15:03:22.918721 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f9t57" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="registry-server" containerID="cri-o://8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195" gracePeriod=2 Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.456982 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.580472 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-catalog-content\") pod \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.580640 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-utilities\") pod \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.580711 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jp9z\" (UniqueName: \"kubernetes.io/projected/68cb5e7a-de96-4970-bfc8-a2948d224e8f-kube-api-access-2jp9z\") pod \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\" (UID: \"68cb5e7a-de96-4970-bfc8-a2948d224e8f\") " Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.581562 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-utilities" (OuterVolumeSpecName: "utilities") pod "68cb5e7a-de96-4970-bfc8-a2948d224e8f" (UID: "68cb5e7a-de96-4970-bfc8-a2948d224e8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.599500 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68cb5e7a-de96-4970-bfc8-a2948d224e8f-kube-api-access-2jp9z" (OuterVolumeSpecName: "kube-api-access-2jp9z") pod "68cb5e7a-de96-4970-bfc8-a2948d224e8f" (UID: "68cb5e7a-de96-4970-bfc8-a2948d224e8f"). InnerVolumeSpecName "kube-api-access-2jp9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.684124 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.685360 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jp9z\" (UniqueName: \"kubernetes.io/projected/68cb5e7a-de96-4970-bfc8-a2948d224e8f-kube-api-access-2jp9z\") on node \"crc\" DevicePath \"\"" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.932618 4771 generic.go:334] "Generic (PLEG): container finished" podID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerID="8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195" exitCode=0 Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.932662 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9t57" event={"ID":"68cb5e7a-de96-4970-bfc8-a2948d224e8f","Type":"ContainerDied","Data":"8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195"} Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.932720 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f9t57" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.932753 4771 scope.go:117] "RemoveContainer" containerID="8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.932733 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f9t57" event={"ID":"68cb5e7a-de96-4970-bfc8-a2948d224e8f","Type":"ContainerDied","Data":"80559135a195291f8ecccf7a9fe9752d7b2216cc9ce43ba95741d24e98a1cc37"} Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.957580 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68cb5e7a-de96-4970-bfc8-a2948d224e8f" (UID: "68cb5e7a-de96-4970-bfc8-a2948d224e8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.972346 4771 scope.go:117] "RemoveContainer" containerID="c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.993727 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68cb5e7a-de96-4970-bfc8-a2948d224e8f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:03:23 crc kubenswrapper[4771]: I0123 15:03:23.996606 4771 scope.go:117] "RemoveContainer" containerID="3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604" Jan 23 15:03:24 crc kubenswrapper[4771]: I0123 15:03:24.054950 4771 scope.go:117] "RemoveContainer" containerID="8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195" Jan 23 15:03:24 crc kubenswrapper[4771]: E0123 15:03:24.056242 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195\": container with ID starting with 8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195 not found: ID does not exist" containerID="8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195" Jan 23 15:03:24 crc kubenswrapper[4771]: I0123 15:03:24.056304 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195"} err="failed to get container status \"8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195\": rpc error: code = NotFound desc = could not find container \"8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195\": container with ID starting with 8885c4f13c09ee7e3bf517f248011115c5f328ef5371f51a217344aa4fbde195 not found: ID does not exist" Jan 23 15:03:24 crc kubenswrapper[4771]: I0123 15:03:24.056355 4771 scope.go:117] "RemoveContainer" containerID="c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427" Jan 23 15:03:24 crc kubenswrapper[4771]: E0123 15:03:24.057706 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427\": container with ID starting with c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427 not found: ID does not exist" containerID="c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427" Jan 23 15:03:24 crc 
kubenswrapper[4771]: I0123 15:03:24.057741 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427"} err="failed to get container status \"c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427\": rpc error: code = NotFound desc = could not find container \"c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427\": container with ID starting with c4979e3f8e8181de433ca638febcacffffbd3fee76b60d7f0fec4ca8a2782427 not found: ID does not exist" Jan 23 15:03:24 crc kubenswrapper[4771]: I0123 15:03:24.057761 4771 scope.go:117] "RemoveContainer" containerID="3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604" Jan 23 15:03:24 crc kubenswrapper[4771]: E0123 15:03:24.058179 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604\": container with ID starting with 3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604 not found: ID does not exist" containerID="3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604" Jan 23 15:03:24 crc kubenswrapper[4771]: I0123 15:03:24.058202 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604"} err="failed to get container status \"3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604\": rpc error: code = NotFound desc = could not find container \"3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604\": container with ID starting with 3dafebfe9121e876b5ea538be76cb046dbff5a1bf44760e0029ba9ee93429604 not found: ID does not exist" Jan 23 15:03:24 crc kubenswrapper[4771]: I0123 15:03:24.269067 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f9t57"] Jan 23 15:03:24 crc kubenswrapper[4771]: I0123 15:03:24.282509 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f9t57"] Jan 23 15:03:25 crc kubenswrapper[4771]: I0123 15:03:25.245169 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" path="/var/lib/kubelet/pods/68cb5e7a-de96-4970-bfc8-a2948d224e8f/volumes" Jan 23 15:04:00 crc kubenswrapper[4771]: I0123 15:04:00.311937 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:04:00 crc kubenswrapper[4771]: I0123 15:04:00.312590 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:04:30 crc kubenswrapper[4771]: I0123 15:04:30.312888 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:04:30 crc kubenswrapper[4771]: I0123 
15:04:30.313514 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:05:00 crc kubenswrapper[4771]: I0123 15:05:00.311445 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:05:00 crc kubenswrapper[4771]: I0123 15:05:00.311868 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:05:00 crc kubenswrapper[4771]: I0123 15:05:00.311911 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 15:05:00 crc kubenswrapper[4771]: I0123 15:05:00.312379 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24eff8549632849720fabda7c84bc3f93022730cbf5277518f1ab0179ebc126d"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 15:05:00 crc kubenswrapper[4771]: I0123 15:05:00.312444 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://24eff8549632849720fabda7c84bc3f93022730cbf5277518f1ab0179ebc126d" gracePeriod=600 Jan 23 15:05:01 crc kubenswrapper[4771]: I0123 15:05:01.025389 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="24eff8549632849720fabda7c84bc3f93022730cbf5277518f1ab0179ebc126d" exitCode=0 Jan 23 15:05:01 crc kubenswrapper[4771]: I0123 15:05:01.025585 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"24eff8549632849720fabda7c84bc3f93022730cbf5277518f1ab0179ebc126d"} Jan 23 15:05:01 crc kubenswrapper[4771]: I0123 15:05:01.026295 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf"} Jan 23 15:05:01 crc kubenswrapper[4771]: I0123 15:05:01.026324 4771 scope.go:117] "RemoveContainer" containerID="35de8313516e0e5d3b9ef93edff4d9daf2b391a8ba17db17e69c8652ce500bfb" Jan 23 15:07:00 crc kubenswrapper[4771]: I0123 15:07:00.311977 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 23 15:07:00 crc kubenswrapper[4771]: I0123 15:07:00.312544 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:07:30 crc kubenswrapper[4771]: I0123 15:07:30.312275 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:07:30 crc kubenswrapper[4771]: I0123 15:07:30.312776 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.312156 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.312871 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.312930 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.313903 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.313969 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" gracePeriod=600 Jan 23 15:08:00 crc kubenswrapper[4771]: E0123 15:08:00.435290 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.908161 4771 generic.go:334] "Generic (PLEG): 
container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" exitCode=0 Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.908334 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf"} Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.908858 4771 scope.go:117] "RemoveContainer" containerID="24eff8549632849720fabda7c84bc3f93022730cbf5277518f1ab0179ebc126d" Jan 23 15:08:00 crc kubenswrapper[4771]: I0123 15:08:00.909749 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:08:00 crc kubenswrapper[4771]: E0123 15:08:00.910051 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:08:12 crc kubenswrapper[4771]: I0123 15:08:12.228986 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:08:12 crc kubenswrapper[4771]: E0123 15:08:12.230077 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:08:26 crc kubenswrapper[4771]: I0123 15:08:26.229793 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:08:26 crc kubenswrapper[4771]: E0123 15:08:26.231227 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:08:38 crc kubenswrapper[4771]: I0123 15:08:38.228298 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:08:38 crc kubenswrapper[4771]: E0123 15:08:38.229298 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:08:53 crc kubenswrapper[4771]: I0123 15:08:53.229111 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:08:53 
crc kubenswrapper[4771]: E0123 15:08:53.229980 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:09:06 crc kubenswrapper[4771]: I0123 15:09:06.228747 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:09:06 crc kubenswrapper[4771]: E0123 15:09:06.230115 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:09:19 crc kubenswrapper[4771]: I0123 15:09:19.248261 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:09:19 crc kubenswrapper[4771]: E0123 15:09:19.255435 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.037916 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dz4l5"] Jan 23 15:09:29 crc kubenswrapper[4771]: E0123 15:09:29.039246 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="extract-utilities" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.039260 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="extract-utilities" Jan 23 15:09:29 crc kubenswrapper[4771]: E0123 15:09:29.039282 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="registry-server" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.039289 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="registry-server" Jan 23 15:09:29 crc kubenswrapper[4771]: E0123 15:09:29.039322 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="extract-content" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.039328 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="extract-content" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.039549 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="68cb5e7a-de96-4970-bfc8-a2948d224e8f" containerName="registry-server" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.041185 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.050301 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dz4l5"] Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.110324 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-catalog-content\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.110388 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-utilities\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.110989 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ntmc\" (UniqueName: \"kubernetes.io/projected/ec2503ee-a96a-4988-81a5-9ecea89d9458-kube-api-access-6ntmc\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.213034 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-utilities\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.213178 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ntmc\" (UniqueName: \"kubernetes.io/projected/ec2503ee-a96a-4988-81a5-9ecea89d9458-kube-api-access-6ntmc\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.213294 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-catalog-content\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.213672 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-utilities\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.213741 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-catalog-content\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.233794 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6ntmc\" (UniqueName: \"kubernetes.io/projected/ec2503ee-a96a-4988-81a5-9ecea89d9458-kube-api-access-6ntmc\") pod \"certified-operators-dz4l5\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.365011 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:29 crc kubenswrapper[4771]: I0123 15:09:29.949491 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dz4l5"] Jan 23 15:09:30 crc kubenswrapper[4771]: I0123 15:09:30.791460 4771 generic.go:334] "Generic (PLEG): container finished" podID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerID="39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345" exitCode=0 Jan 23 15:09:30 crc kubenswrapper[4771]: I0123 15:09:30.791498 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dz4l5" event={"ID":"ec2503ee-a96a-4988-81a5-9ecea89d9458","Type":"ContainerDied","Data":"39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345"} Jan 23 15:09:30 crc kubenswrapper[4771]: I0123 15:09:30.791718 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dz4l5" event={"ID":"ec2503ee-a96a-4988-81a5-9ecea89d9458","Type":"ContainerStarted","Data":"55b411845e8d921dcfb2c268fbce9a48a8911efefe954d8e9281033f554c5ccc"} Jan 23 15:09:30 crc kubenswrapper[4771]: I0123 15:09:30.793426 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 15:09:31 crc kubenswrapper[4771]: I0123 15:09:31.229028 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:09:31 crc kubenswrapper[4771]: E0123 15:09:31.229502 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:09:32 crc kubenswrapper[4771]: I0123 15:09:32.821959 4771 generic.go:334] "Generic (PLEG): container finished" podID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerID="c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7" exitCode=0 Jan 23 15:09:32 crc kubenswrapper[4771]: I0123 15:09:32.822434 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dz4l5" event={"ID":"ec2503ee-a96a-4988-81a5-9ecea89d9458","Type":"ContainerDied","Data":"c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7"} Jan 23 15:09:34 crc kubenswrapper[4771]: I0123 15:09:34.868874 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dz4l5" event={"ID":"ec2503ee-a96a-4988-81a5-9ecea89d9458","Type":"ContainerStarted","Data":"4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67"} Jan 23 15:09:34 crc kubenswrapper[4771]: I0123 15:09:34.899977 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dz4l5" podStartSLOduration=3.41618162 
podStartE2EDuration="5.899938907s" podCreationTimestamp="2026-01-23 15:09:29 +0000 UTC" firstStartedPulling="2026-01-23 15:09:30.793092945 +0000 UTC m=+5811.815630570" lastFinishedPulling="2026-01-23 15:09:33.276850232 +0000 UTC m=+5814.299387857" observedRunningTime="2026-01-23 15:09:34.887582619 +0000 UTC m=+5815.910120254" watchObservedRunningTime="2026-01-23 15:09:34.899938907 +0000 UTC m=+5815.922476542" Jan 23 15:09:39 crc kubenswrapper[4771]: I0123 15:09:39.365204 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:39 crc kubenswrapper[4771]: I0123 15:09:39.365843 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:39 crc kubenswrapper[4771]: I0123 15:09:39.423792 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:40 crc kubenswrapper[4771]: I0123 15:09:40.001035 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:40 crc kubenswrapper[4771]: I0123 15:09:40.052751 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dz4l5"] Jan 23 15:09:41 crc kubenswrapper[4771]: I0123 15:09:41.932872 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dz4l5" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="registry-server" containerID="cri-o://4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67" gracePeriod=2 Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.228505 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:09:42 crc kubenswrapper[4771]: E0123 15:09:42.229012 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.458609 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.538534 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-catalog-content\") pod \"ec2503ee-a96a-4988-81a5-9ecea89d9458\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.538594 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-utilities\") pod \"ec2503ee-a96a-4988-81a5-9ecea89d9458\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.538789 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ntmc\" (UniqueName: \"kubernetes.io/projected/ec2503ee-a96a-4988-81a5-9ecea89d9458-kube-api-access-6ntmc\") pod \"ec2503ee-a96a-4988-81a5-9ecea89d9458\" (UID: \"ec2503ee-a96a-4988-81a5-9ecea89d9458\") " Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.540664 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-utilities" (OuterVolumeSpecName: "utilities") pod "ec2503ee-a96a-4988-81a5-9ecea89d9458" (UID: "ec2503ee-a96a-4988-81a5-9ecea89d9458"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.558238 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2503ee-a96a-4988-81a5-9ecea89d9458-kube-api-access-6ntmc" (OuterVolumeSpecName: "kube-api-access-6ntmc") pod "ec2503ee-a96a-4988-81a5-9ecea89d9458" (UID: "ec2503ee-a96a-4988-81a5-9ecea89d9458"). InnerVolumeSpecName "kube-api-access-6ntmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.585317 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec2503ee-a96a-4988-81a5-9ecea89d9458" (UID: "ec2503ee-a96a-4988-81a5-9ecea89d9458"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.642171 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.642480 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec2503ee-a96a-4988-81a5-9ecea89d9458-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.642490 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ntmc\" (UniqueName: \"kubernetes.io/projected/ec2503ee-a96a-4988-81a5-9ecea89d9458-kube-api-access-6ntmc\") on node \"crc\" DevicePath \"\"" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.944316 4771 generic.go:334] "Generic (PLEG): container finished" podID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerID="4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67" exitCode=0 Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.944377 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dz4l5" event={"ID":"ec2503ee-a96a-4988-81a5-9ecea89d9458","Type":"ContainerDied","Data":"4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67"} Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.944387 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dz4l5" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.944428 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dz4l5" event={"ID":"ec2503ee-a96a-4988-81a5-9ecea89d9458","Type":"ContainerDied","Data":"55b411845e8d921dcfb2c268fbce9a48a8911efefe954d8e9281033f554c5ccc"} Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.944446 4771 scope.go:117] "RemoveContainer" containerID="4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.966643 4771 scope.go:117] "RemoveContainer" containerID="c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7" Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.979335 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dz4l5"] Jan 23 15:09:42 crc kubenswrapper[4771]: I0123 15:09:42.998502 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dz4l5"] Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.016899 4771 scope.go:117] "RemoveContainer" containerID="39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345" Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.043667 4771 scope.go:117] "RemoveContainer" containerID="4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67" Jan 23 15:09:43 crc kubenswrapper[4771]: E0123 15:09:43.053605 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67\": container with ID starting with 4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67 not found: ID does not exist" containerID="4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67" Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.053661 
4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67"} err="failed to get container status \"4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67\": rpc error: code = NotFound desc = could not find container \"4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67\": container with ID starting with 4db891ab5b817996606dffd609d6a95fb3367926b7eb4700897eb912a6440e67 not found: ID does not exist" Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.053690 4771 scope.go:117] "RemoveContainer" containerID="c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7" Jan 23 15:09:43 crc kubenswrapper[4771]: E0123 15:09:43.054295 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7\": container with ID starting with c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7 not found: ID does not exist" containerID="c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7" Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.054365 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7"} err="failed to get container status \"c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7\": rpc error: code = NotFound desc = could not find container \"c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7\": container with ID starting with c3a8ce601157e966cb9c74d1af7a863eae4d6dd70b0111ea4f12800ed705ebb7 not found: ID does not exist" Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.054431 4771 scope.go:117] "RemoveContainer" containerID="39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345" Jan 23 15:09:43 crc kubenswrapper[4771]: E0123 15:09:43.055057 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345\": container with ID starting with 39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345 not found: ID does not exist" containerID="39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345" Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.055093 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345"} err="failed to get container status \"39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345\": rpc error: code = NotFound desc = could not find container \"39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345\": container with ID starting with 39321868d62d113c18bc912ba4ef51cc129a214ec71767ae194dc8cea8e22345 not found: ID does not exist" Jan 23 15:09:43 crc kubenswrapper[4771]: I0123 15:09:43.240386 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" path="/var/lib/kubelet/pods/ec2503ee-a96a-4988-81a5-9ecea89d9458/volumes" Jan 23 15:09:55 crc kubenswrapper[4771]: I0123 15:09:55.228603 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:09:55 crc kubenswrapper[4771]: E0123 15:09:55.229555 4771 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.838793 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n4xqq"] Jan 23 15:09:57 crc kubenswrapper[4771]: E0123 15:09:57.839819 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="extract-content" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.839841 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="extract-content" Jan 23 15:09:57 crc kubenswrapper[4771]: E0123 15:09:57.839864 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="extract-utilities" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.839872 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="extract-utilities" Jan 23 15:09:57 crc kubenswrapper[4771]: E0123 15:09:57.839903 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="registry-server" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.839911 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="registry-server" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.840247 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2503ee-a96a-4988-81a5-9ecea89d9458" containerName="registry-server" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.842232 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.860976 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4xqq"] Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.895566 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-utilities\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.895798 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-catalog-content\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.896025 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdnb\" (UniqueName: \"kubernetes.io/projected/132114ee-6f80-4a9d-8080-4c516d57d9d5-kube-api-access-mwdnb\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.998743 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-utilities\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.999290 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-catalog-content\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.999349 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdnb\" (UniqueName: \"kubernetes.io/projected/132114ee-6f80-4a9d-8080-4c516d57d9d5-kube-api-access-mwdnb\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.999475 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-utilities\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:57 crc kubenswrapper[4771]: I0123 15:09:57.999760 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-catalog-content\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:58 crc kubenswrapper[4771]: I0123 15:09:58.023472 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mwdnb\" (UniqueName: \"kubernetes.io/projected/132114ee-6f80-4a9d-8080-4c516d57d9d5-kube-api-access-mwdnb\") pod \"redhat-marketplace-n4xqq\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:58 crc kubenswrapper[4771]: I0123 15:09:58.163293 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:09:58 crc kubenswrapper[4771]: I0123 15:09:58.672100 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4xqq"] Jan 23 15:09:59 crc kubenswrapper[4771]: I0123 15:09:59.110480 4771 generic.go:334] "Generic (PLEG): container finished" podID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerID="29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d" exitCode=0 Jan 23 15:09:59 crc kubenswrapper[4771]: I0123 15:09:59.110592 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4xqq" event={"ID":"132114ee-6f80-4a9d-8080-4c516d57d9d5","Type":"ContainerDied","Data":"29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d"} Jan 23 15:09:59 crc kubenswrapper[4771]: I0123 15:09:59.110806 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4xqq" event={"ID":"132114ee-6f80-4a9d-8080-4c516d57d9d5","Type":"ContainerStarted","Data":"5a42a77e37e9528acbc604803f164fe7d1c9f4fab297463ff6996e84f0e4c404"} Jan 23 15:10:01 crc kubenswrapper[4771]: I0123 15:10:01.137755 4771 generic.go:334] "Generic (PLEG): container finished" podID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerID="62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2" exitCode=0 Jan 23 15:10:01 crc kubenswrapper[4771]: I0123 15:10:01.137944 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4xqq" event={"ID":"132114ee-6f80-4a9d-8080-4c516d57d9d5","Type":"ContainerDied","Data":"62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2"} Jan 23 15:10:02 crc kubenswrapper[4771]: I0123 15:10:02.151932 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4xqq" event={"ID":"132114ee-6f80-4a9d-8080-4c516d57d9d5","Type":"ContainerStarted","Data":"e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b"} Jan 23 15:10:02 crc kubenswrapper[4771]: I0123 15:10:02.179630 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n4xqq" podStartSLOduration=2.531033506 podStartE2EDuration="5.179606106s" podCreationTimestamp="2026-01-23 15:09:57 +0000 UTC" firstStartedPulling="2026-01-23 15:09:59.113043313 +0000 UTC m=+5840.135580938" lastFinishedPulling="2026-01-23 15:10:01.761615913 +0000 UTC m=+5842.784153538" observedRunningTime="2026-01-23 15:10:02.169988493 +0000 UTC m=+5843.192526138" watchObservedRunningTime="2026-01-23 15:10:02.179606106 +0000 UTC m=+5843.202143721" Jan 23 15:10:08 crc kubenswrapper[4771]: I0123 15:10:08.164098 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:10:08 crc kubenswrapper[4771]: I0123 15:10:08.164743 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:10:08 crc kubenswrapper[4771]: I0123 15:10:08.216304 4771 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:10:08 crc kubenswrapper[4771]: I0123 15:10:08.229041 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:10:08 crc kubenswrapper[4771]: E0123 15:10:08.229361 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:10:08 crc kubenswrapper[4771]: I0123 15:10:08.281987 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:10:08 crc kubenswrapper[4771]: I0123 15:10:08.457610 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4xqq"] Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.240906 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n4xqq" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="registry-server" containerID="cri-o://e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b" gracePeriod=2 Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.725837 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4xqq" Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.800545 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-catalog-content\") pod \"132114ee-6f80-4a9d-8080-4c516d57d9d5\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.800796 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwdnb\" (UniqueName: \"kubernetes.io/projected/132114ee-6f80-4a9d-8080-4c516d57d9d5-kube-api-access-mwdnb\") pod \"132114ee-6f80-4a9d-8080-4c516d57d9d5\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.800848 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-utilities\") pod \"132114ee-6f80-4a9d-8080-4c516d57d9d5\" (UID: \"132114ee-6f80-4a9d-8080-4c516d57d9d5\") " Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.802091 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-utilities" (OuterVolumeSpecName: "utilities") pod "132114ee-6f80-4a9d-8080-4c516d57d9d5" (UID: "132114ee-6f80-4a9d-8080-4c516d57d9d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.806982 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/132114ee-6f80-4a9d-8080-4c516d57d9d5-kube-api-access-mwdnb" (OuterVolumeSpecName: "kube-api-access-mwdnb") pod "132114ee-6f80-4a9d-8080-4c516d57d9d5" (UID: "132114ee-6f80-4a9d-8080-4c516d57d9d5"). 
InnerVolumeSpecName "kube-api-access-mwdnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.825878 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "132114ee-6f80-4a9d-8080-4c516d57d9d5" (UID: "132114ee-6f80-4a9d-8080-4c516d57d9d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.903219 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwdnb\" (UniqueName: \"kubernetes.io/projected/132114ee-6f80-4a9d-8080-4c516d57d9d5-kube-api-access-mwdnb\") on node \"crc\" DevicePath \"\"" Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.903267 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:10:10 crc kubenswrapper[4771]: I0123 15:10:10.903277 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/132114ee-6f80-4a9d-8080-4c516d57d9d5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.253325 4771 generic.go:334] "Generic (PLEG): container finished" podID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerID="e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b" exitCode=0 Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.253393 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4xqq" event={"ID":"132114ee-6f80-4a9d-8080-4c516d57d9d5","Type":"ContainerDied","Data":"e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b"} Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.253453 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4xqq" event={"ID":"132114ee-6f80-4a9d-8080-4c516d57d9d5","Type":"ContainerDied","Data":"5a42a77e37e9528acbc604803f164fe7d1c9f4fab297463ff6996e84f0e4c404"} Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.253475 4771 scope.go:117] "RemoveContainer" containerID="e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b" Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.253675 4771 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.292396 4771 scope.go:117] "RemoveContainer" containerID="62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2"
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.292545 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4xqq"]
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.302955 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4xqq"]
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.317628 4771 scope.go:117] "RemoveContainer" containerID="29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d"
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.379541 4771 scope.go:117] "RemoveContainer" containerID="e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b"
Jan 23 15:10:11 crc kubenswrapper[4771]: E0123 15:10:11.379998 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b\": container with ID starting with e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b not found: ID does not exist" containerID="e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b"
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.380046 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b"} err="failed to get container status \"e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b\": rpc error: code = NotFound desc = could not find container \"e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b\": container with ID starting with e4fa4d0b7a18271ca6493b65cc70c05432a0e53efc4394d8b328140e182c169b not found: ID does not exist"
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.380073 4771 scope.go:117] "RemoveContainer" containerID="62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2"
Jan 23 15:10:11 crc kubenswrapper[4771]: E0123 15:10:11.380382 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2\": container with ID starting with 62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2 not found: ID does not exist" containerID="62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2"
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.380426 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2"} err="failed to get container status \"62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2\": rpc error: code = NotFound desc = could not find container \"62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2\": container with ID starting with 62cedebed9e529c51951c6fcdd07cd340bbb85d6c6ce1310b013fb9521f117f2 not found: ID does not exist"
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.380445 4771 scope.go:117] "RemoveContainer" containerID="29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d"
Jan 23 15:10:11 crc kubenswrapper[4771]: E0123 15:10:11.380641 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d\": container with ID starting with 29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d not found: ID does not exist" containerID="29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d"
Jan 23 15:10:11 crc kubenswrapper[4771]: I0123 15:10:11.380662 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d"} err="failed to get container status \"29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d\": rpc error: code = NotFound desc = could not find container \"29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d\": container with ID starting with 29b8fd5bc8bb26fc261f664cf71950bd358f9d3fcfb9d8f9bcd45a28b3f8142d not found: ID does not exist"
Jan 23 15:10:13 crc kubenswrapper[4771]: I0123 15:10:13.251796 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" path="/var/lib/kubelet/pods/132114ee-6f80-4a9d-8080-4c516d57d9d5/volumes"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:11:11 crc kubenswrapper[4771]: I0123 15:11:11.229197 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:11:11 crc kubenswrapper[4771]: E0123 15:11:11.230034 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:11:25 crc kubenswrapper[4771]: I0123 15:11:25.229603 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:11:25 crc kubenswrapper[4771]: E0123 15:11:25.230904 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:11:36 crc kubenswrapper[4771]: I0123 15:11:36.228966 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:11:36 crc kubenswrapper[4771]: E0123 15:11:36.229710 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:11:49 crc kubenswrapper[4771]: I0123 15:11:49.236116 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:11:49 crc kubenswrapper[4771]: E0123 15:11:49.236978 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:12:00 crc kubenswrapper[4771]: I0123 15:12:00.228222 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:12:00 crc kubenswrapper[4771]: E0123 15:12:00.229091 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:12:15 crc kubenswrapper[4771]: I0123 15:12:15.229160 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:12:15 crc kubenswrapper[4771]: E0123 15:12:15.232018 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:12:30 crc kubenswrapper[4771]: I0123 15:12:30.227849 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:12:30 crc kubenswrapper[4771]: E0123 15:12:30.228806 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:12:43 crc kubenswrapper[4771]: I0123 15:12:43.229235 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:12:43 crc kubenswrapper[4771]: E0123 15:12:43.230101 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:12:55 crc kubenswrapper[4771]: I0123 15:12:55.229489 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:12:55 crc kubenswrapper[4771]: E0123 15:12:55.230176 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:13:09 crc kubenswrapper[4771]: I0123 15:13:09.235676 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:13:10 crc kubenswrapper[4771]: I0123 15:13:10.001852 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"8e8d04c6fd695cf8b72be6f1280e785b4c682b1e6c4ada81fb9881a0b828484b"} Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.093755 4771 kubelet.go:2421] "SyncLoop ADD" source="api" 
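
The back-off run above repeats roughly every 10 to 15 seconds from 15:10:19 to 15:12:55 while machine-config-daemon sits in its "back-off 5m0s" window, and the container finally restarts at 15:13:09. A sketch of the restart schedule this implies, assuming kubelet's usual container back-off defaults of a 10s initial delay doubling per restart up to a 5m cap (the defaults are an assumption here; the log itself only shows the cap):

```go
// Sketch: the exponential restart back-off schedule implied by the
// repeated "back-off 5m0s" rejections above (assumed kubelet defaults:
// 10s initial delay, doubling per restart, capped at 5 minutes).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for restart := 1; delay < maxDelay; restart++ {
		fmt.Printf("restart %2d: wait %v\n", restart, delay)
		delay *= 2
	}
	fmt.Printf("further restarts: wait %v (cap)\n", maxDelay)
}
```
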
pods=["openshift-marketplace/community-operators-lpcq2"] Jan 23 15:14:01 crc kubenswrapper[4771]: E0123 15:14:01.094979 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="extract-content" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.094997 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="extract-content" Jan 23 15:14:01 crc kubenswrapper[4771]: E0123 15:14:01.095026 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="registry-server" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.095041 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="registry-server" Jan 23 15:14:01 crc kubenswrapper[4771]: E0123 15:14:01.095081 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="extract-utilities" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.095088 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="extract-utilities" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.095291 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="132114ee-6f80-4a9d-8080-4c516d57d9d5" containerName="registry-server" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.096928 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.106996 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lpcq2"] Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.194012 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-catalog-content\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.194098 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vr2r\" (UniqueName: \"kubernetes.io/projected/76990785-efea-4855-9c11-ac290e894022-kube-api-access-2vr2r\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.194272 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-utilities\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.297486 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-utilities\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.297634 4771 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-catalog-content\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.298073 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-utilities\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.298137 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-catalog-content\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.298210 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vr2r\" (UniqueName: \"kubernetes.io/projected/76990785-efea-4855-9c11-ac290e894022-kube-api-access-2vr2r\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.328798 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vr2r\" (UniqueName: \"kubernetes.io/projected/76990785-efea-4855-9c11-ac290e894022-kube-api-access-2vr2r\") pod \"community-operators-lpcq2\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") " pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:01 crc kubenswrapper[4771]: I0123 15:14:01.488730 4771 util.go:30] "No sandbox for pod can be found. 
Jan 23 15:14:02 crc kubenswrapper[4771]: I0123 15:14:02.076779 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lpcq2"]
Jan 23 15:14:02 crc kubenswrapper[4771]: I0123 15:14:02.116736 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpcq2" event={"ID":"76990785-efea-4855-9c11-ac290e894022","Type":"ContainerStarted","Data":"5aac8012b448dff8dcb0b4140f147bb67507ee19cfe16a9af357a250e84b43cf"}
Jan 23 15:14:03 crc kubenswrapper[4771]: I0123 15:14:03.130832 4771 generic.go:334] "Generic (PLEG): container finished" podID="76990785-efea-4855-9c11-ac290e894022" containerID="d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99" exitCode=0
Jan 23 15:14:03 crc kubenswrapper[4771]: I0123 15:14:03.130908 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpcq2" event={"ID":"76990785-efea-4855-9c11-ac290e894022","Type":"ContainerDied","Data":"d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99"}
Jan 23 15:14:05 crc kubenswrapper[4771]: I0123 15:14:05.159112 4771 generic.go:334] "Generic (PLEG): container finished" podID="76990785-efea-4855-9c11-ac290e894022" containerID="d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623" exitCode=0
Jan 23 15:14:05 crc kubenswrapper[4771]: I0123 15:14:05.159192 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpcq2" event={"ID":"76990785-efea-4855-9c11-ac290e894022","Type":"ContainerDied","Data":"d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623"}
Jan 23 15:14:06 crc kubenswrapper[4771]: I0123 15:14:06.173810 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpcq2" event={"ID":"76990785-efea-4855-9c11-ac290e894022","Type":"ContainerStarted","Data":"8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef"}
Jan 23 15:14:06 crc kubenswrapper[4771]: I0123 15:14:06.200308 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lpcq2" podStartSLOduration=2.792100329 podStartE2EDuration="5.20028152s" podCreationTimestamp="2026-01-23 15:14:01 +0000 UTC" firstStartedPulling="2026-01-23 15:14:03.13505624 +0000 UTC m=+6084.157593865" lastFinishedPulling="2026-01-23 15:14:05.543237431 +0000 UTC m=+6086.565775056" observedRunningTime="2026-01-23 15:14:06.191081962 +0000 UTC m=+6087.213619587" watchObservedRunningTime="2026-01-23 15:14:06.20028152 +0000 UTC m=+6087.222819145"
Jan 23 15:14:11 crc kubenswrapper[4771]: I0123 15:14:11.489001 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lpcq2"
Jan 23 15:14:11 crc kubenswrapper[4771]: I0123 15:14:11.489541 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lpcq2"
Jan 23 15:14:11 crc kubenswrapper[4771]: I0123 15:14:11.551915 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lpcq2"
Jan 23 15:14:12 crc kubenswrapper[4771]: I0123 15:14:12.275149 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lpcq2"
Jan 23 15:14:12 crc kubenswrapper[4771]: I0123 15:14:12.325492 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lpcq2"]
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.244487 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lpcq2" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="registry-server" containerID="cri-o://8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef" gracePeriod=2
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.799542 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lpcq2"
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.932466 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-catalog-content\") pod \"76990785-efea-4855-9c11-ac290e894022\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") "
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.932547 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-utilities\") pod \"76990785-efea-4855-9c11-ac290e894022\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") "
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.932578 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vr2r\" (UniqueName: \"kubernetes.io/projected/76990785-efea-4855-9c11-ac290e894022-kube-api-access-2vr2r\") pod \"76990785-efea-4855-9c11-ac290e894022\" (UID: \"76990785-efea-4855-9c11-ac290e894022\") "
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.933744 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-utilities" (OuterVolumeSpecName: "utilities") pod "76990785-efea-4855-9c11-ac290e894022" (UID: "76990785-efea-4855-9c11-ac290e894022"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.939542 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76990785-efea-4855-9c11-ac290e894022-kube-api-access-2vr2r" (OuterVolumeSpecName: "kube-api-access-2vr2r") pod "76990785-efea-4855-9c11-ac290e894022" (UID: "76990785-efea-4855-9c11-ac290e894022"). InnerVolumeSpecName "kube-api-access-2vr2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 15:14:14 crc kubenswrapper[4771]: I0123 15:14:14.992001 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76990785-efea-4855-9c11-ac290e894022" (UID: "76990785-efea-4855-9c11-ac290e894022"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.035233 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.035274 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76990785-efea-4855-9c11-ac290e894022-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.035287 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vr2r\" (UniqueName: \"kubernetes.io/projected/76990785-efea-4855-9c11-ac290e894022-kube-api-access-2vr2r\") on node \"crc\" DevicePath \"\"" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.255382 4771 generic.go:334] "Generic (PLEG): container finished" podID="76990785-efea-4855-9c11-ac290e894022" containerID="8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef" exitCode=0 Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.255450 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpcq2" event={"ID":"76990785-efea-4855-9c11-ac290e894022","Type":"ContainerDied","Data":"8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef"} Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.255509 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lpcq2" event={"ID":"76990785-efea-4855-9c11-ac290e894022","Type":"ContainerDied","Data":"5aac8012b448dff8dcb0b4140f147bb67507ee19cfe16a9af357a250e84b43cf"} Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.255533 4771 scope.go:117] "RemoveContainer" containerID="8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.256574 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lpcq2" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.291034 4771 scope.go:117] "RemoveContainer" containerID="d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.295866 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lpcq2"] Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.304807 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lpcq2"] Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.313765 4771 scope.go:117] "RemoveContainer" containerID="d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.368286 4771 scope.go:117] "RemoveContainer" containerID="8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef" Jan 23 15:14:15 crc kubenswrapper[4771]: E0123 15:14:15.368901 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef\": container with ID starting with 8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef not found: ID does not exist" containerID="8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.368934 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef"} err="failed to get container status \"8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef\": rpc error: code = NotFound desc = could not find container \"8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef\": container with ID starting with 8921cc021918ea6d23c3fb68f615e682db8bfd2b4c3b3011e46e09de85e1adef not found: ID does not exist" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.368960 4771 scope.go:117] "RemoveContainer" containerID="d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623" Jan 23 15:14:15 crc kubenswrapper[4771]: E0123 15:14:15.369292 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623\": container with ID starting with d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623 not found: ID does not exist" containerID="d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.369312 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623"} err="failed to get container status \"d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623\": rpc error: code = NotFound desc = could not find container \"d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623\": container with ID starting with d607a2ec2251111b48a04ad3a62d7da876d9aada0e10ccab3c8eb7761dd64623 not found: ID does not exist" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.369329 4771 scope.go:117] "RemoveContainer" containerID="d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99" Jan 23 15:14:15 crc kubenswrapper[4771]: E0123 15:14:15.369887 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99\": container with ID starting with d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99 not found: ID does not exist" containerID="d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99" Jan 23 15:14:15 crc kubenswrapper[4771]: I0123 15:14:15.369910 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99"} err="failed to get container status \"d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99\": rpc error: code = NotFound desc = could not find container \"d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99\": container with ID starting with d569d5c6cb699b6cfe679d1c8dd9ddc1d7a06bff609235c95f809b3da36b6f99 not found: ID does not exist" Jan 23 15:14:17 crc kubenswrapper[4771]: I0123 15:14:17.239725 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76990785-efea-4855-9c11-ac290e894022" path="/var/lib/kubelet/pods/76990785-efea-4855-9c11-ac290e894022/volumes" Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.150539 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"] Jan 23 15:15:00 crc kubenswrapper[4771]: E0123 15:15:00.151767 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-content" Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.151787 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-content" Jan 23 15:15:00 crc kubenswrapper[4771]: E0123 15:15:00.151816 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="registry-server" Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.151823 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="registry-server" Jan 23 15:15:00 crc kubenswrapper[4771]: E0123 15:15:00.151855 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-utilities" Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.151862 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-utilities" Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.152076 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="registry-server" Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.152912 4771 util.go:30] "No sandbox for pod can be found. 
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.150539 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"]
Jan 23 15:15:00 crc kubenswrapper[4771]: E0123 15:15:00.151767 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-content"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.151787 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-content"
Jan 23 15:15:00 crc kubenswrapper[4771]: E0123 15:15:00.151816 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="registry-server"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.151823 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="registry-server"
Jan 23 15:15:00 crc kubenswrapper[4771]: E0123 15:15:00.151855 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-utilities"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.151862 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="extract-utilities"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.152076 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="76990785-efea-4855-9c11-ac290e894022" containerName="registry-server"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.152912 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.155004 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.155125 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.175336 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"]
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.232814 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9wt\" (UniqueName: \"kubernetes.io/projected/c0b04eff-5f50-423d-b4e4-65bb709be207-kube-api-access-4m9wt\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.233201 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b04eff-5f50-423d-b4e4-65bb709be207-config-volume\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.233338 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b04eff-5f50-423d-b4e4-65bb709be207-secret-volume\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.335776 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9wt\" (UniqueName: \"kubernetes.io/projected/c0b04eff-5f50-423d-b4e4-65bb709be207-kube-api-access-4m9wt\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.335912 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b04eff-5f50-423d-b4e4-65bb709be207-config-volume\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.336005 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b04eff-5f50-423d-b4e4-65bb709be207-secret-volume\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.337745 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b04eff-5f50-423d-b4e4-65bb709be207-config-volume\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.343382 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b04eff-5f50-423d-b4e4-65bb709be207-secret-volume\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.354125 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9wt\" (UniqueName: \"kubernetes.io/projected/c0b04eff-5f50-423d-b4e4-65bb709be207-kube-api-access-4m9wt\") pod \"collect-profiles-29486355-vkcpx\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.493044 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:00 crc kubenswrapper[4771]: I0123 15:15:00.952673 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"]
Jan 23 15:15:01 crc kubenswrapper[4771]: I0123 15:15:01.721021 4771 generic.go:334] "Generic (PLEG): container finished" podID="c0b04eff-5f50-423d-b4e4-65bb709be207" containerID="210b70a83ee277d23f1b21da702a5c5d7224015a57703a052dff0aa82fcf960a" exitCode=0
Jan 23 15:15:01 crc kubenswrapper[4771]: I0123 15:15:01.721206 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx" event={"ID":"c0b04eff-5f50-423d-b4e4-65bb709be207","Type":"ContainerDied","Data":"210b70a83ee277d23f1b21da702a5c5d7224015a57703a052dff0aa82fcf960a"}
Jan 23 15:15:01 crc kubenswrapper[4771]: I0123 15:15:01.721374 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx" event={"ID":"c0b04eff-5f50-423d-b4e4-65bb709be207","Type":"ContainerStarted","Data":"1ad13a49524eef56cf1da9eaf7e1bb269b6977a02bdb40474c9ed1a9d1e72201"}
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.167531 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.201608 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b04eff-5f50-423d-b4e4-65bb709be207-config-volume\") pod \"c0b04eff-5f50-423d-b4e4-65bb709be207\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") "
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.201677 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m9wt\" (UniqueName: \"kubernetes.io/projected/c0b04eff-5f50-423d-b4e4-65bb709be207-kube-api-access-4m9wt\") pod \"c0b04eff-5f50-423d-b4e4-65bb709be207\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") "
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.201711 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b04eff-5f50-423d-b4e4-65bb709be207-secret-volume\") pod \"c0b04eff-5f50-423d-b4e4-65bb709be207\" (UID: \"c0b04eff-5f50-423d-b4e4-65bb709be207\") "
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.202626 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b04eff-5f50-423d-b4e4-65bb709be207-config-volume" (OuterVolumeSpecName: "config-volume") pod "c0b04eff-5f50-423d-b4e4-65bb709be207" (UID: "c0b04eff-5f50-423d-b4e4-65bb709be207"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.208236 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b04eff-5f50-423d-b4e4-65bb709be207-kube-api-access-4m9wt" (OuterVolumeSpecName: "kube-api-access-4m9wt") pod "c0b04eff-5f50-423d-b4e4-65bb709be207" (UID: "c0b04eff-5f50-423d-b4e4-65bb709be207"). InnerVolumeSpecName "kube-api-access-4m9wt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.208875 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b04eff-5f50-423d-b4e4-65bb709be207-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c0b04eff-5f50-423d-b4e4-65bb709be207" (UID: "c0b04eff-5f50-423d-b4e4-65bb709be207"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.305671 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0b04eff-5f50-423d-b4e4-65bb709be207-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.305716 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m9wt\" (UniqueName: \"kubernetes.io/projected/c0b04eff-5f50-423d-b4e4-65bb709be207-kube-api-access-4m9wt\") on node \"crc\" DevicePath \"\""
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.305732 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0b04eff-5f50-423d-b4e4-65bb709be207-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.741536 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx" event={"ID":"c0b04eff-5f50-423d-b4e4-65bb709be207","Type":"ContainerDied","Data":"1ad13a49524eef56cf1da9eaf7e1bb269b6977a02bdb40474c9ed1a9d1e72201"}
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.741827 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad13a49524eef56cf1da9eaf7e1bb269b6977a02bdb40474c9ed1a9d1e72201"
Jan 23 15:15:03 crc kubenswrapper[4771]: I0123 15:15:03.741889 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486355-vkcpx"
Jan 23 15:15:04 crc kubenswrapper[4771]: I0123 15:15:04.252003 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5"]
Jan 23 15:15:04 crc kubenswrapper[4771]: I0123 15:15:04.261786 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-g5kt5"]
Jan 23 15:15:05 crc kubenswrapper[4771]: I0123 15:15:05.242026 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250f782e-f7d2-4bd3-9359-fe9e97d868cc" path="/var/lib/kubelet/pods/250f782e-f7d2-4bd3-9359-fe9e97d868cc/volumes"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.351005 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9vx2s"]
Jan 23 15:15:25 crc kubenswrapper[4771]: E0123 15:15:25.352086 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b04eff-5f50-423d-b4e4-65bb709be207" containerName="collect-profiles"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.352104 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b04eff-5f50-423d-b4e4-65bb709be207" containerName="collect-profiles"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.355707 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b04eff-5f50-423d-b4e4-65bb709be207" containerName="collect-profiles"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.357925 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.394919 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vx2s"]
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.447828 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-catalog-content\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.447965 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7g95\" (UniqueName: \"kubernetes.io/projected/53f2fb4d-a5ca-4470-9741-88614051458f-kube-api-access-v7g95\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.447998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-utilities\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.551066 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7g95\" (UniqueName: \"kubernetes.io/projected/53f2fb4d-a5ca-4470-9741-88614051458f-kube-api-access-v7g95\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.551127 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-utilities\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.551273 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-catalog-content\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.551827 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-catalog-content\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.551871 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-utilities\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.578459 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7g95\" (UniqueName: \"kubernetes.io/projected/53f2fb4d-a5ca-4470-9741-88614051458f-kube-api-access-v7g95\") pod \"redhat-operators-9vx2s\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:25 crc kubenswrapper[4771]: I0123 15:15:25.682827 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:26 crc kubenswrapper[4771]: I0123 15:15:26.250027 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9vx2s"]
Jan 23 15:15:26 crc kubenswrapper[4771]: I0123 15:15:26.983943 4771 generic.go:334] "Generic (PLEG): container finished" podID="53f2fb4d-a5ca-4470-9741-88614051458f" containerID="77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc" exitCode=0
Jan 23 15:15:26 crc kubenswrapper[4771]: I0123 15:15:26.984313 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vx2s" event={"ID":"53f2fb4d-a5ca-4470-9741-88614051458f","Type":"ContainerDied","Data":"77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc"}
Jan 23 15:15:26 crc kubenswrapper[4771]: I0123 15:15:26.984343 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vx2s" event={"ID":"53f2fb4d-a5ca-4470-9741-88614051458f","Type":"ContainerStarted","Data":"ea80f46f0dfcec2cfd388e1d707e1bc2cd06b59494a8766fafbf813dc1841624"}
Jan 23 15:15:26 crc kubenswrapper[4771]: I0123 15:15:26.989203 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 15:15:27 crc kubenswrapper[4771]: I0123 15:15:27.996111 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vx2s" event={"ID":"53f2fb4d-a5ca-4470-9741-88614051458f","Type":"ContainerStarted","Data":"1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e"}
Jan 23 15:15:30 crc kubenswrapper[4771]: I0123 15:15:30.311962 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 15:15:30 crc kubenswrapper[4771]: I0123 15:15:30.312984 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 15:15:31 crc kubenswrapper[4771]: I0123 15:15:31.054098 4771 generic.go:334] "Generic (PLEG): container finished" podID="53f2fb4d-a5ca-4470-9741-88614051458f" containerID="1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e" exitCode=0
Jan 23 15:15:31 crc kubenswrapper[4771]: I0123 15:15:31.054158 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vx2s" event={"ID":"53f2fb4d-a5ca-4470-9741-88614051458f","Type":"ContainerDied","Data":"1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e"}
Jan 23 15:15:32 crc kubenswrapper[4771]: I0123 15:15:32.069862 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vx2s" event={"ID":"53f2fb4d-a5ca-4470-9741-88614051458f","Type":"ContainerStarted","Data":"72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50"}
Jan 23 15:15:32 crc kubenswrapper[4771]: I0123 15:15:32.097960 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9vx2s" podStartSLOduration=2.626903484 podStartE2EDuration="7.097926928s" podCreationTimestamp="2026-01-23 15:15:25 +0000 UTC" firstStartedPulling="2026-01-23 15:15:26.988976076 +0000 UTC m=+6168.011513701" lastFinishedPulling="2026-01-23 15:15:31.45999952 +0000 UTC m=+6172.482537145" observedRunningTime="2026-01-23 15:15:32.088742179 +0000 UTC m=+6173.111279804" watchObservedRunningTime="2026-01-23 15:15:32.097926928 +0000 UTC m=+6173.120464553"
Jan 23 15:15:35 crc kubenswrapper[4771]: I0123 15:15:35.683081 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:35 crc kubenswrapper[4771]: I0123 15:15:35.683468 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:36 crc kubenswrapper[4771]: I0123 15:15:36.732733 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9vx2s" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="registry-server" probeResult="failure" output=<
Jan 23 15:15:36 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s
Jan 23 15:15:36 crc kubenswrapper[4771]: >
Jan 23 15:15:45 crc kubenswrapper[4771]: I0123 15:15:45.733909 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:45 crc kubenswrapper[4771]: I0123 15:15:45.793340 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:46 crc kubenswrapper[4771]: I0123 15:15:46.926739 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vx2s"]
Jan 23 15:15:47 crc kubenswrapper[4771]: I0123 15:15:47.218626 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9vx2s" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="registry-server" containerID="cri-o://72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50" gracePeriod=2
Jan 23 15:15:47 crc kubenswrapper[4771]: I0123 15:15:47.842353 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vx2s"
Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:47.999940 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-utilities\") pod \"53f2fb4d-a5ca-4470-9741-88614051458f\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") "
Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.001212 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-utilities" (OuterVolumeSpecName: "utilities") pod "53f2fb4d-a5ca-4470-9741-88614051458f" (UID: "53f2fb4d-a5ca-4470-9741-88614051458f"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.001499 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-catalog-content\") pod \"53f2fb4d-a5ca-4470-9741-88614051458f\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.001695 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7g95\" (UniqueName: \"kubernetes.io/projected/53f2fb4d-a5ca-4470-9741-88614051458f-kube-api-access-v7g95\") pod \"53f2fb4d-a5ca-4470-9741-88614051458f\" (UID: \"53f2fb4d-a5ca-4470-9741-88614051458f\") " Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.002327 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.011457 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f2fb4d-a5ca-4470-9741-88614051458f-kube-api-access-v7g95" (OuterVolumeSpecName: "kube-api-access-v7g95") pod "53f2fb4d-a5ca-4470-9741-88614051458f" (UID: "53f2fb4d-a5ca-4470-9741-88614051458f"). InnerVolumeSpecName "kube-api-access-v7g95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.105426 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7g95\" (UniqueName: \"kubernetes.io/projected/53f2fb4d-a5ca-4470-9741-88614051458f-kube-api-access-v7g95\") on node \"crc\" DevicePath \"\"" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.153486 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53f2fb4d-a5ca-4470-9741-88614051458f" (UID: "53f2fb4d-a5ca-4470-9741-88614051458f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.207737 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53f2fb4d-a5ca-4470-9741-88614051458f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.233059 4771 generic.go:334] "Generic (PLEG): container finished" podID="53f2fb4d-a5ca-4470-9741-88614051458f" containerID="72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50" exitCode=0 Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.233100 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vx2s" event={"ID":"53f2fb4d-a5ca-4470-9741-88614051458f","Type":"ContainerDied","Data":"72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50"} Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.233127 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9vx2s" event={"ID":"53f2fb4d-a5ca-4470-9741-88614051458f","Type":"ContainerDied","Data":"ea80f46f0dfcec2cfd388e1d707e1bc2cd06b59494a8766fafbf813dc1841624"} Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.233145 4771 scope.go:117] "RemoveContainer" containerID="72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.233285 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9vx2s" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.264505 4771 scope.go:117] "RemoveContainer" containerID="1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.276280 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9vx2s"] Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.287103 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9vx2s"] Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.297445 4771 scope.go:117] "RemoveContainer" containerID="77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.357677 4771 scope.go:117] "RemoveContainer" containerID="72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50" Jan 23 15:15:48 crc kubenswrapper[4771]: E0123 15:15:48.358343 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50\": container with ID starting with 72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50 not found: ID does not exist" containerID="72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.358428 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50"} err="failed to get container status \"72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50\": rpc error: code = NotFound desc = could not find container \"72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50\": container with ID starting with 72b516957f43ffd8706f164eaf1a685d929c97afc8cc45d63d28a85801538c50 not found: ID does not exist" Jan 23 15:15:48 crc 
kubenswrapper[4771]: I0123 15:15:48.358463 4771 scope.go:117] "RemoveContainer" containerID="1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e" Jan 23 15:15:48 crc kubenswrapper[4771]: E0123 15:15:48.358914 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e\": container with ID starting with 1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e not found: ID does not exist" containerID="1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.358960 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e"} err="failed to get container status \"1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e\": rpc error: code = NotFound desc = could not find container \"1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e\": container with ID starting with 1bd03919d32cc7c1586d5a9b2945d797f5576efdeb9f21ce0f1a42eef7585b1e not found: ID does not exist" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.358982 4771 scope.go:117] "RemoveContainer" containerID="77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc" Jan 23 15:15:48 crc kubenswrapper[4771]: E0123 15:15:48.359252 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc\": container with ID starting with 77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc not found: ID does not exist" containerID="77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc" Jan 23 15:15:48 crc kubenswrapper[4771]: I0123 15:15:48.359273 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc"} err="failed to get container status \"77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc\": rpc error: code = NotFound desc = could not find container \"77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc\": container with ID starting with 77d7cf2835dac0150e2bcf58302003a99c49ebce34d43f811916bb46c030e2cc not found: ID does not exist" Jan 23 15:15:49 crc kubenswrapper[4771]: I0123 15:15:49.240766 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" path="/var/lib/kubelet/pods/53f2fb4d-a5ca-4470-9741-88614051458f/volumes" Jan 23 15:16:00 crc kubenswrapper[4771]: I0123 15:16:00.311794 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:16:00 crc kubenswrapper[4771]: I0123 15:16:00.312354 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:16:01 crc kubenswrapper[4771]: I0123 15:16:01.920618 4771 scope.go:117] "RemoveContainer" 
containerID="da1d11faf4a6ff6d35b8206148ec4b43acfc4c00d4e2e445ec8099bacc0365b1" Jan 23 15:16:30 crc kubenswrapper[4771]: I0123 15:16:30.312121 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:16:30 crc kubenswrapper[4771]: I0123 15:16:30.313598 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:16:30 crc kubenswrapper[4771]: I0123 15:16:30.313717 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 15:16:30 crc kubenswrapper[4771]: I0123 15:16:30.314687 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e8d04c6fd695cf8b72be6f1280e785b4c682b1e6c4ada81fb9881a0b828484b"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 15:16:30 crc kubenswrapper[4771]: I0123 15:16:30.314823 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://8e8d04c6fd695cf8b72be6f1280e785b4c682b1e6c4ada81fb9881a0b828484b" gracePeriod=600 Jan 23 15:16:31 crc kubenswrapper[4771]: I0123 15:16:31.288725 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="8e8d04c6fd695cf8b72be6f1280e785b4c682b1e6c4ada81fb9881a0b828484b" exitCode=0 Jan 23 15:16:31 crc kubenswrapper[4771]: I0123 15:16:31.288803 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"8e8d04c6fd695cf8b72be6f1280e785b4c682b1e6c4ada81fb9881a0b828484b"} Jan 23 15:16:31 crc kubenswrapper[4771]: I0123 15:16:31.289429 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b"} Jan 23 15:16:31 crc kubenswrapper[4771]: I0123 15:16:31.289460 4771 scope.go:117] "RemoveContainer" containerID="8cb1eab56bbd8b3995d64a0fc569ad62cb01fd80d3ac0abc1ae4957f53ad50bf" Jan 23 15:18:30 crc kubenswrapper[4771]: I0123 15:18:30.312300 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:18:30 crc kubenswrapper[4771]: I0123 15:18:30.313227 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:19:00 crc kubenswrapper[4771]: I0123 15:19:00.312236 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:19:00 crc kubenswrapper[4771]: I0123 15:19:00.313558 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:19:30 crc kubenswrapper[4771]: I0123 15:19:30.311484 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:19:30 crc kubenswrapper[4771]: I0123 15:19:30.312104 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:19:30 crc kubenswrapper[4771]: I0123 15:19:30.312166 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 15:19:30 crc kubenswrapper[4771]: I0123 15:19:30.313206 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 15:19:30 crc kubenswrapper[4771]: I0123 15:19:30.313268 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" gracePeriod=600 Jan 23 15:19:30 crc kubenswrapper[4771]: E0123 15:19:30.453111 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:19:31 crc kubenswrapper[4771]: I0123 15:19:31.112329 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" exitCode=0 Jan 23 15:19:31 crc kubenswrapper[4771]: I0123 15:19:31.112426 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b"} Jan 23 15:19:31 crc kubenswrapper[4771]: I0123 15:19:31.112699 4771 scope.go:117] "RemoveContainer" containerID="8e8d04c6fd695cf8b72be6f1280e785b4c682b1e6c4ada81fb9881a0b828484b" Jan 23 15:19:31 crc kubenswrapper[4771]: I0123 15:19:31.113386 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:19:31 crc kubenswrapper[4771]: E0123 15:19:31.113696 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:19:43 crc kubenswrapper[4771]: I0123 15:19:43.228208 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:19:43 crc kubenswrapper[4771]: E0123 15:19:43.229231 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:19:58 crc kubenswrapper[4771]: I0123 15:19:58.229183 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:19:58 crc kubenswrapper[4771]: E0123 15:19:58.230114 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:20:10 crc kubenswrapper[4771]: I0123 15:20:10.227982 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:20:10 crc kubenswrapper[4771]: E0123 15:20:10.228807 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:20:25 crc kubenswrapper[4771]: I0123 15:20:25.229445 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:20:25 crc kubenswrapper[4771]: E0123 15:20:25.230254 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:20:39 crc kubenswrapper[4771]: I0123 15:20:39.235462 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:20:39 crc kubenswrapper[4771]: E0123 15:20:39.236157 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:20:47 crc kubenswrapper[4771]: I0123 15:20:47.991010 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5w7b"] Jan 23 15:20:47 crc kubenswrapper[4771]: E0123 15:20:47.992026 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="extract-content" Jan 23 15:20:47 crc kubenswrapper[4771]: I0123 15:20:47.992041 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="extract-content" Jan 23 15:20:47 crc kubenswrapper[4771]: E0123 15:20:47.992067 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="registry-server" Jan 23 15:20:47 crc kubenswrapper[4771]: I0123 15:20:47.992073 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="registry-server" Jan 23 15:20:47 crc kubenswrapper[4771]: E0123 15:20:47.992094 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="extract-utilities" Jan 23 15:20:47 crc kubenswrapper[4771]: I0123 15:20:47.992100 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="extract-utilities" Jan 23 15:20:47 crc kubenswrapper[4771]: I0123 15:20:47.992311 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f2fb4d-a5ca-4470-9741-88614051458f" containerName="registry-server" Jan 23 15:20:47 crc kubenswrapper[4771]: I0123 15:20:47.994098 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.015780 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5w7b"] Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.108618 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-catalog-content\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.108998 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-utilities\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.109082 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g66jj\" (UniqueName: \"kubernetes.io/projected/62da2e45-2a63-486b-af4d-82fe1ec053b6-kube-api-access-g66jj\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.212064 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-utilities\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.212153 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g66jj\" (UniqueName: \"kubernetes.io/projected/62da2e45-2a63-486b-af4d-82fe1ec053b6-kube-api-access-g66jj\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.212207 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-catalog-content\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.212730 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-utilities\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.212753 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-catalog-content\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.240560 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g66jj\" (UniqueName: \"kubernetes.io/projected/62da2e45-2a63-486b-af4d-82fe1ec053b6-kube-api-access-g66jj\") pod \"redhat-marketplace-m5w7b\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.323261 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.805793 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5w7b"] Jan 23 15:20:48 crc kubenswrapper[4771]: I0123 15:20:48.885266 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5w7b" event={"ID":"62da2e45-2a63-486b-af4d-82fe1ec053b6","Type":"ContainerStarted","Data":"cdb23e16c1cd6847a64a3c3ec0e250bef3a4a19e8f38f0b6824140109ed1f41f"} Jan 23 15:20:49 crc kubenswrapper[4771]: I0123 15:20:49.898444 4771 generic.go:334] "Generic (PLEG): container finished" podID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerID="ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171" exitCode=0 Jan 23 15:20:49 crc kubenswrapper[4771]: I0123 15:20:49.898629 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5w7b" event={"ID":"62da2e45-2a63-486b-af4d-82fe1ec053b6","Type":"ContainerDied","Data":"ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171"} Jan 23 15:20:49 crc kubenswrapper[4771]: I0123 15:20:49.901066 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 15:20:50 crc kubenswrapper[4771]: I0123 15:20:50.915916 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5w7b" event={"ID":"62da2e45-2a63-486b-af4d-82fe1ec053b6","Type":"ContainerStarted","Data":"4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64"} Jan 23 15:20:51 crc kubenswrapper[4771]: I0123 15:20:51.929489 4771 generic.go:334] "Generic (PLEG): container finished" podID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerID="4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64" exitCode=0 Jan 23 15:20:51 crc kubenswrapper[4771]: I0123 15:20:51.929565 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5w7b" event={"ID":"62da2e45-2a63-486b-af4d-82fe1ec053b6","Type":"ContainerDied","Data":"4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64"} Jan 23 15:20:52 crc kubenswrapper[4771]: I0123 15:20:52.228931 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:20:52 crc kubenswrapper[4771]: E0123 15:20:52.229184 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:20:53 crc kubenswrapper[4771]: I0123 15:20:53.947914 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5w7b" 
event={"ID":"62da2e45-2a63-486b-af4d-82fe1ec053b6","Type":"ContainerStarted","Data":"5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d"} Jan 23 15:20:53 crc kubenswrapper[4771]: I0123 15:20:53.974265 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5w7b" podStartSLOduration=4.122835766 podStartE2EDuration="6.974240349s" podCreationTimestamp="2026-01-23 15:20:47 +0000 UTC" firstStartedPulling="2026-01-23 15:20:49.900680816 +0000 UTC m=+6490.923218481" lastFinishedPulling="2026-01-23 15:20:52.752085439 +0000 UTC m=+6493.774623064" observedRunningTime="2026-01-23 15:20:53.964134461 +0000 UTC m=+6494.986672096" watchObservedRunningTime="2026-01-23 15:20:53.974240349 +0000 UTC m=+6494.996777984" Jan 23 15:20:58 crc kubenswrapper[4771]: I0123 15:20:58.324173 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:58 crc kubenswrapper[4771]: I0123 15:20:58.325593 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:58 crc kubenswrapper[4771]: I0123 15:20:58.386244 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:59 crc kubenswrapper[4771]: I0123 15:20:59.045797 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:20:59 crc kubenswrapper[4771]: I0123 15:20:59.091823 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5w7b"] Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.018286 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m5w7b" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="registry-server" containerID="cri-o://5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d" gracePeriod=2 Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.523785 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.535004 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g66jj\" (UniqueName: \"kubernetes.io/projected/62da2e45-2a63-486b-af4d-82fe1ec053b6-kube-api-access-g66jj\") pod \"62da2e45-2a63-486b-af4d-82fe1ec053b6\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.535084 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-utilities\") pod \"62da2e45-2a63-486b-af4d-82fe1ec053b6\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.535110 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-catalog-content\") pod \"62da2e45-2a63-486b-af4d-82fe1ec053b6\" (UID: \"62da2e45-2a63-486b-af4d-82fe1ec053b6\") " Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.536018 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-utilities" (OuterVolumeSpecName: "utilities") pod "62da2e45-2a63-486b-af4d-82fe1ec053b6" (UID: "62da2e45-2a63-486b-af4d-82fe1ec053b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.538198 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.550758 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62da2e45-2a63-486b-af4d-82fe1ec053b6-kube-api-access-g66jj" (OuterVolumeSpecName: "kube-api-access-g66jj") pod "62da2e45-2a63-486b-af4d-82fe1ec053b6" (UID: "62da2e45-2a63-486b-af4d-82fe1ec053b6"). InnerVolumeSpecName "kube-api-access-g66jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.567626 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62da2e45-2a63-486b-af4d-82fe1ec053b6" (UID: "62da2e45-2a63-486b-af4d-82fe1ec053b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.640578 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g66jj\" (UniqueName: \"kubernetes.io/projected/62da2e45-2a63-486b-af4d-82fe1ec053b6-kube-api-access-g66jj\") on node \"crc\" DevicePath \"\"" Jan 23 15:21:01 crc kubenswrapper[4771]: I0123 15:21:01.640623 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62da2e45-2a63-486b-af4d-82fe1ec053b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.035704 4771 generic.go:334] "Generic (PLEG): container finished" podID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerID="5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d" exitCode=0 Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.035749 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5w7b" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.035770 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5w7b" event={"ID":"62da2e45-2a63-486b-af4d-82fe1ec053b6","Type":"ContainerDied","Data":"5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d"} Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.036043 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5w7b" event={"ID":"62da2e45-2a63-486b-af4d-82fe1ec053b6","Type":"ContainerDied","Data":"cdb23e16c1cd6847a64a3c3ec0e250bef3a4a19e8f38f0b6824140109ed1f41f"} Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.036061 4771 scope.go:117] "RemoveContainer" containerID="5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.063663 4771 scope.go:117] "RemoveContainer" containerID="4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.094520 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5w7b"] Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.098055 4771 scope.go:117] "RemoveContainer" containerID="ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.115493 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5w7b"] Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.134519 4771 scope.go:117] "RemoveContainer" containerID="5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d" Jan 23 15:21:02 crc kubenswrapper[4771]: E0123 15:21:02.135065 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d\": container with ID starting with 5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d not found: ID does not exist" containerID="5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.135109 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d"} err="failed to get container status 
\"5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d\": rpc error: code = NotFound desc = could not find container \"5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d\": container with ID starting with 5ddb1b6e13ce1aded761eba3f1fd47b5d3f068e40331502b8c160dc7b7912c5d not found: ID does not exist" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.135137 4771 scope.go:117] "RemoveContainer" containerID="4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64" Jan 23 15:21:02 crc kubenswrapper[4771]: E0123 15:21:02.135594 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64\": container with ID starting with 4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64 not found: ID does not exist" containerID="4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.135641 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64"} err="failed to get container status \"4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64\": rpc error: code = NotFound desc = could not find container \"4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64\": container with ID starting with 4655f56e331f26e9622abb0bbc5cdfb2c75c138c0f1ef4f13fb370636de54e64 not found: ID does not exist" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.135657 4771 scope.go:117] "RemoveContainer" containerID="ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171" Jan 23 15:21:02 crc kubenswrapper[4771]: E0123 15:21:02.135963 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171\": container with ID starting with ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171 not found: ID does not exist" containerID="ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171" Jan 23 15:21:02 crc kubenswrapper[4771]: I0123 15:21:02.135991 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171"} err="failed to get container status \"ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171\": rpc error: code = NotFound desc = could not find container \"ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171\": container with ID starting with ed8ba3948b4926abbf77ee7c82d717a805e21fa9f3c072116f7b8001eac2f171 not found: ID does not exist" Jan 23 15:21:03 crc kubenswrapper[4771]: I0123 15:21:03.246732 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" path="/var/lib/kubelet/pods/62da2e45-2a63-486b-af4d-82fe1ec053b6/volumes" Jan 23 15:21:07 crc kubenswrapper[4771]: I0123 15:21:07.231120 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:21:07 crc kubenswrapper[4771]: E0123 15:21:07.231823 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:21:21 crc kubenswrapper[4771]: I0123 15:21:21.228089 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:21:21 crc kubenswrapper[4771]: E0123 15:21:21.228789 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:21:36 crc kubenswrapper[4771]: I0123 15:21:36.229857 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:21:36 crc kubenswrapper[4771]: E0123 15:21:36.230900 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:21:49 crc kubenswrapper[4771]: I0123 15:21:49.235946 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:21:49 crc kubenswrapper[4771]: E0123 15:21:49.237940 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:22:04 crc kubenswrapper[4771]: I0123 15:22:04.229159 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:22:04 crc kubenswrapper[4771]: E0123 15:22:04.230008 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:22:19 crc kubenswrapper[4771]: I0123 15:22:19.235332 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:22:19 crc kubenswrapper[4771]: E0123 15:22:19.236241 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:22:30 crc kubenswrapper[4771]: I0123 15:22:30.228561 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:22:30 crc kubenswrapper[4771]: E0123 15:22:30.229306 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:22:45 crc kubenswrapper[4771]: I0123 15:22:45.228584 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:22:45 crc kubenswrapper[4771]: E0123 15:22:45.229597 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:22:58 crc kubenswrapper[4771]: I0123 15:22:58.229304 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:22:58 crc kubenswrapper[4771]: E0123 15:22:58.231024 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:23:13 crc kubenswrapper[4771]: I0123 15:23:13.229674 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:23:13 crc kubenswrapper[4771]: E0123 15:23:13.230448 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:23:25 crc kubenswrapper[4771]: I0123 15:23:25.229079 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:23:25 crc kubenswrapper[4771]: E0123 15:23:25.230194 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:23:40 crc kubenswrapper[4771]: I0123 15:23:40.229115 4771 scope.go:117] "RemoveContainer" 
containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:23:40 crc kubenswrapper[4771]: E0123 15:23:40.230173 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:23:51 crc kubenswrapper[4771]: I0123 15:23:51.229547 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:23:51 crc kubenswrapper[4771]: E0123 15:23:51.230778 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:24:05 crc kubenswrapper[4771]: I0123 15:24:05.228830 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:24:05 crc kubenswrapper[4771]: E0123 15:24:05.229812 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.662106 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sfgcd"] Jan 23 15:24:14 crc kubenswrapper[4771]: E0123 15:24:14.663253 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="extract-content" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.663272 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="extract-content" Jan 23 15:24:14 crc kubenswrapper[4771]: E0123 15:24:14.663294 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="registry-server" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.663302 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="registry-server" Jan 23 15:24:14 crc kubenswrapper[4771]: E0123 15:24:14.663318 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="extract-utilities" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.663325 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="extract-utilities" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.663582 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="62da2e45-2a63-486b-af4d-82fe1ec053b6" containerName="registry-server" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.665659 4771 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.681942 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfgcd"] Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.752300 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-catalog-content\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.752690 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k45rr\" (UniqueName: \"kubernetes.io/projected/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-kube-api-access-k45rr\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.752720 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-utilities\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.855323 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k45rr\" (UniqueName: \"kubernetes.io/projected/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-kube-api-access-k45rr\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.855390 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-utilities\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.855573 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-catalog-content\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.856112 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-catalog-content\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 15:24:14.856272 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-utilities\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:14 crc kubenswrapper[4771]: I0123 
15:24:14.881746 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k45rr\" (UniqueName: \"kubernetes.io/projected/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-kube-api-access-k45rr\") pod \"community-operators-sfgcd\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:15 crc kubenswrapper[4771]: I0123 15:24:15.029468 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:15 crc kubenswrapper[4771]: I0123 15:24:15.558378 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfgcd"] Jan 23 15:24:15 crc kubenswrapper[4771]: E0123 15:24:15.978261 4771 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f9a8a1_1cc6_4392_95d2_637ae1a47e5d.slice/crio-d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83f9a8a1_1cc6_4392_95d2_637ae1a47e5d.slice/crio-conmon-d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8.scope\": RecentStats: unable to find data in memory cache]" Jan 23 15:24:16 crc kubenswrapper[4771]: I0123 15:24:16.138785 4771 generic.go:334] "Generic (PLEG): container finished" podID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerID="d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8" exitCode=0 Jan 23 15:24:16 crc kubenswrapper[4771]: I0123 15:24:16.138833 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfgcd" event={"ID":"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d","Type":"ContainerDied","Data":"d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8"} Jan 23 15:24:16 crc kubenswrapper[4771]: I0123 15:24:16.138880 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfgcd" event={"ID":"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d","Type":"ContainerStarted","Data":"ce0c7a14147cee8c733697abe319e23a216743d277938fcb31f924d3de779494"} Jan 23 15:24:17 crc kubenswrapper[4771]: I0123 15:24:17.227710 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:24:17 crc kubenswrapper[4771]: E0123 15:24:17.228803 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:24:18 crc kubenswrapper[4771]: I0123 15:24:18.167875 4771 generic.go:334] "Generic (PLEG): container finished" podID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerID="afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542" exitCode=0 Jan 23 15:24:18 crc kubenswrapper[4771]: I0123 15:24:18.167948 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfgcd" event={"ID":"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d","Type":"ContainerDied","Data":"afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542"} Jan 23 15:24:19 crc 
kubenswrapper[4771]: I0123 15:24:19.182598 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfgcd" event={"ID":"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d","Type":"ContainerStarted","Data":"f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211"} Jan 23 15:24:19 crc kubenswrapper[4771]: I0123 15:24:19.202705 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sfgcd" podStartSLOduration=2.692686366 podStartE2EDuration="5.202677118s" podCreationTimestamp="2026-01-23 15:24:14 +0000 UTC" firstStartedPulling="2026-01-23 15:24:16.144665703 +0000 UTC m=+6697.167203328" lastFinishedPulling="2026-01-23 15:24:18.654656465 +0000 UTC m=+6699.677194080" observedRunningTime="2026-01-23 15:24:19.198834468 +0000 UTC m=+6700.221372093" watchObservedRunningTime="2026-01-23 15:24:19.202677118 +0000 UTC m=+6700.225214753" Jan 23 15:24:25 crc kubenswrapper[4771]: I0123 15:24:25.029909 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:25 crc kubenswrapper[4771]: I0123 15:24:25.030508 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:25 crc kubenswrapper[4771]: I0123 15:24:25.093681 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:25 crc kubenswrapper[4771]: I0123 15:24:25.331814 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:25 crc kubenswrapper[4771]: I0123 15:24:25.423218 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sfgcd"] Jan 23 15:24:27 crc kubenswrapper[4771]: I0123 15:24:27.275739 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sfgcd" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="registry-server" containerID="cri-o://f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211" gracePeriod=2 Jan 23 15:24:27 crc kubenswrapper[4771]: I0123 15:24:27.848191 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:27 crc kubenswrapper[4771]: I0123 15:24:27.990820 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-catalog-content\") pod \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " Jan 23 15:24:27 crc kubenswrapper[4771]: I0123 15:24:27.991167 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-utilities\") pod \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " Jan 23 15:24:27 crc kubenswrapper[4771]: I0123 15:24:27.991338 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k45rr\" (UniqueName: \"kubernetes.io/projected/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-kube-api-access-k45rr\") pod \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\" (UID: \"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d\") " Jan 23 15:24:27 crc kubenswrapper[4771]: I0123 15:24:27.994156 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-utilities" (OuterVolumeSpecName: "utilities") pod "83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" (UID: "83f9a8a1-1cc6-4392-95d2-637ae1a47e5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.004844 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-kube-api-access-k45rr" (OuterVolumeSpecName: "kube-api-access-k45rr") pod "83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" (UID: "83f9a8a1-1cc6-4392-95d2-637ae1a47e5d"). InnerVolumeSpecName "kube-api-access-k45rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.067297 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" (UID: "83f9a8a1-1cc6-4392-95d2-637ae1a47e5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.094282 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k45rr\" (UniqueName: \"kubernetes.io/projected/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-kube-api-access-k45rr\") on node \"crc\" DevicePath \"\"" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.094338 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.094351 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.293306 4771 generic.go:334] "Generic (PLEG): container finished" podID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerID="f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211" exitCode=0 Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.294323 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfgcd" event={"ID":"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d","Type":"ContainerDied","Data":"f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211"} Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.294386 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfgcd" event={"ID":"83f9a8a1-1cc6-4392-95d2-637ae1a47e5d","Type":"ContainerDied","Data":"ce0c7a14147cee8c733697abe319e23a216743d277938fcb31f924d3de779494"} Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.294426 4771 scope.go:117] "RemoveContainer" containerID="f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.294701 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfgcd" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.320739 4771 scope.go:117] "RemoveContainer" containerID="afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.354358 4771 scope.go:117] "RemoveContainer" containerID="d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.357483 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sfgcd"] Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.369096 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sfgcd"] Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.416215 4771 scope.go:117] "RemoveContainer" containerID="f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211" Jan 23 15:24:28 crc kubenswrapper[4771]: E0123 15:24:28.416800 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211\": container with ID starting with f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211 not found: ID does not exist" containerID="f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.416853 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211"} err="failed to get container status \"f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211\": rpc error: code = NotFound desc = could not find container \"f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211\": container with ID starting with f396d16c1f4f7a7cfdd47faf989a783072d36d154638b2e322577552339e6211 not found: ID does not exist" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.416884 4771 scope.go:117] "RemoveContainer" containerID="afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542" Jan 23 15:24:28 crc kubenswrapper[4771]: E0123 15:24:28.417308 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542\": container with ID starting with afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542 not found: ID does not exist" containerID="afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.417369 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542"} err="failed to get container status \"afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542\": rpc error: code = NotFound desc = could not find container \"afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542\": container with ID starting with afbf162defc6def9221cd4faf72622d2d9588f80be6a54aa1e57187761673542 not found: ID does not exist" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.417405 4771 scope.go:117] "RemoveContainer" containerID="d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8" Jan 23 15:24:28 crc kubenswrapper[4771]: E0123 15:24:28.417978 4771 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8\": container with ID starting with d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8 not found: ID does not exist" containerID="d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8" Jan 23 15:24:28 crc kubenswrapper[4771]: I0123 15:24:28.418004 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8"} err="failed to get container status \"d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8\": rpc error: code = NotFound desc = could not find container \"d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8\": container with ID starting with d5f265e0cc4b7c5dc5f2bf8cac4a7b022ec7c1a6187d485039d2e4009fc10de8 not found: ID does not exist" Jan 23 15:24:29 crc kubenswrapper[4771]: I0123 15:24:29.242035 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" path="/var/lib/kubelet/pods/83f9a8a1-1cc6-4392-95d2-637ae1a47e5d/volumes" Jan 23 15:24:31 crc kubenswrapper[4771]: I0123 15:24:31.228340 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:24:32 crc kubenswrapper[4771]: I0123 15:24:32.344686 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"e9ab99fde83c203e14c42cfd2a490fda6a5c9857b22a7eec54327e57574ad4ab"} Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.691857 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vmzdf"] Jan 23 15:25:02 crc kubenswrapper[4771]: E0123 15:25:02.694249 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="registry-server" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.694289 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="registry-server" Jan 23 15:25:02 crc kubenswrapper[4771]: E0123 15:25:02.694309 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="extract-content" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.694319 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="extract-content" Jan 23 15:25:02 crc kubenswrapper[4771]: E0123 15:25:02.694345 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="extract-utilities" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.694357 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="extract-utilities" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.694690 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="83f9a8a1-1cc6-4392-95d2-637ae1a47e5d" containerName="registry-server" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.697161 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.704023 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmzdf"] Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.761013 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-catalog-content\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.761282 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-utilities\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.761391 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgrq\" (UniqueName: \"kubernetes.io/projected/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-kube-api-access-8lgrq\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.863055 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-catalog-content\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.863201 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-utilities\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.863272 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lgrq\" (UniqueName: \"kubernetes.io/projected/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-kube-api-access-8lgrq\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.863727 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-catalog-content\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.863927 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-utilities\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:02 crc kubenswrapper[4771]: I0123 15:25:02.885396 4771 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8lgrq\" (UniqueName: \"kubernetes.io/projected/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-kube-api-access-8lgrq\") pod \"certified-operators-vmzdf\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:03 crc kubenswrapper[4771]: I0123 15:25:03.021573 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:03 crc kubenswrapper[4771]: I0123 15:25:03.547342 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vmzdf"] Jan 23 15:25:03 crc kubenswrapper[4771]: I0123 15:25:03.674876 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmzdf" event={"ID":"6a1b702d-3e38-4329-b3bf-25ee8a9104ef","Type":"ContainerStarted","Data":"69c7f836c1110da349aebeff772a5b44943742333cc78f75f0ef650c112b12a9"} Jan 23 15:25:04 crc kubenswrapper[4771]: I0123 15:25:04.686243 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerID="1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c" exitCode=0 Jan 23 15:25:04 crc kubenswrapper[4771]: I0123 15:25:04.686345 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmzdf" event={"ID":"6a1b702d-3e38-4329-b3bf-25ee8a9104ef","Type":"ContainerDied","Data":"1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c"} Jan 23 15:25:05 crc kubenswrapper[4771]: I0123 15:25:05.705007 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmzdf" event={"ID":"6a1b702d-3e38-4329-b3bf-25ee8a9104ef","Type":"ContainerStarted","Data":"aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42"} Jan 23 15:25:06 crc kubenswrapper[4771]: I0123 15:25:06.715920 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerID="aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42" exitCode=0 Jan 23 15:25:06 crc kubenswrapper[4771]: I0123 15:25:06.716012 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmzdf" event={"ID":"6a1b702d-3e38-4329-b3bf-25ee8a9104ef","Type":"ContainerDied","Data":"aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42"} Jan 23 15:25:07 crc kubenswrapper[4771]: I0123 15:25:07.727741 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmzdf" event={"ID":"6a1b702d-3e38-4329-b3bf-25ee8a9104ef","Type":"ContainerStarted","Data":"002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db"} Jan 23 15:25:07 crc kubenswrapper[4771]: I0123 15:25:07.758903 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vmzdf" podStartSLOduration=3.139355372 podStartE2EDuration="5.758858464s" podCreationTimestamp="2026-01-23 15:25:02 +0000 UTC" firstStartedPulling="2026-01-23 15:25:04.689136792 +0000 UTC m=+6745.711674417" lastFinishedPulling="2026-01-23 15:25:07.308639884 +0000 UTC m=+6748.331177509" observedRunningTime="2026-01-23 15:25:07.746954659 +0000 UTC m=+6748.769492294" watchObservedRunningTime="2026-01-23 15:25:07.758858464 +0000 UTC m=+6748.781396089" Jan 23 15:25:13 crc kubenswrapper[4771]: I0123 15:25:13.022127 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:13 crc kubenswrapper[4771]: I0123 15:25:13.022821 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:13 crc kubenswrapper[4771]: I0123 15:25:13.075974 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:13 crc kubenswrapper[4771]: I0123 15:25:13.853309 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:13 crc kubenswrapper[4771]: I0123 15:25:13.915471 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vmzdf"] Jan 23 15:25:15 crc kubenswrapper[4771]: I0123 15:25:15.823020 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vmzdf" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="registry-server" containerID="cri-o://002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db" gracePeriod=2 Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.325249 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.481895 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-catalog-content\") pod \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.482045 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lgrq\" (UniqueName: \"kubernetes.io/projected/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-kube-api-access-8lgrq\") pod \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.482283 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-utilities\") pod \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\" (UID: \"6a1b702d-3e38-4329-b3bf-25ee8a9104ef\") " Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.483261 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-utilities" (OuterVolumeSpecName: "utilities") pod "6a1b702d-3e38-4329-b3bf-25ee8a9104ef" (UID: "6a1b702d-3e38-4329-b3bf-25ee8a9104ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.491705 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-kube-api-access-8lgrq" (OuterVolumeSpecName: "kube-api-access-8lgrq") pod "6a1b702d-3e38-4329-b3bf-25ee8a9104ef" (UID: "6a1b702d-3e38-4329-b3bf-25ee8a9104ef"). InnerVolumeSpecName "kube-api-access-8lgrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.532621 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a1b702d-3e38-4329-b3bf-25ee8a9104ef" (UID: "6a1b702d-3e38-4329-b3bf-25ee8a9104ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.584950 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.584989 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.585002 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lgrq\" (UniqueName: \"kubernetes.io/projected/6a1b702d-3e38-4329-b3bf-25ee8a9104ef-kube-api-access-8lgrq\") on node \"crc\" DevicePath \"\"" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.841085 4771 generic.go:334] "Generic (PLEG): container finished" podID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerID="002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db" exitCode=0 Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.841178 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vmzdf" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.841203 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmzdf" event={"ID":"6a1b702d-3e38-4329-b3bf-25ee8a9104ef","Type":"ContainerDied","Data":"002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db"} Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.841544 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vmzdf" event={"ID":"6a1b702d-3e38-4329-b3bf-25ee8a9104ef","Type":"ContainerDied","Data":"69c7f836c1110da349aebeff772a5b44943742333cc78f75f0ef650c112b12a9"} Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.841590 4771 scope.go:117] "RemoveContainer" containerID="002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.862185 4771 scope.go:117] "RemoveContainer" containerID="aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.880352 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vmzdf"] Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.892330 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vmzdf"] Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.907261 4771 scope.go:117] "RemoveContainer" containerID="1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.949911 4771 scope.go:117] "RemoveContainer" containerID="002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db" Jan 23 15:25:16 crc kubenswrapper[4771]: E0123 15:25:16.950500 4771 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db\": container with ID starting with 002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db not found: ID does not exist" containerID="002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.950533 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db"} err="failed to get container status \"002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db\": rpc error: code = NotFound desc = could not find container \"002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db\": container with ID starting with 002aaad206a5f8d4f67de94ca8065cdcd5dcd5b01bb07b2b8273e1def4a067db not found: ID does not exist" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.950554 4771 scope.go:117] "RemoveContainer" containerID="aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42" Jan 23 15:25:16 crc kubenswrapper[4771]: E0123 15:25:16.950878 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42\": container with ID starting with aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42 not found: ID does not exist" containerID="aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.950924 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42"} err="failed to get container status \"aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42\": rpc error: code = NotFound desc = could not find container \"aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42\": container with ID starting with aad033f765987bb8081069d7678626915d9370e04acfbefebed0aa9ef397fd42 not found: ID does not exist" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.950942 4771 scope.go:117] "RemoveContainer" containerID="1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c" Jan 23 15:25:16 crc kubenswrapper[4771]: E0123 15:25:16.951867 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c\": container with ID starting with 1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c not found: ID does not exist" containerID="1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c" Jan 23 15:25:16 crc kubenswrapper[4771]: I0123 15:25:16.951945 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c"} err="failed to get container status \"1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c\": rpc error: code = NotFound desc = could not find container \"1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c\": container with ID starting with 1cff56acedd21b1affff47d4824da02c61a8efd2e5ebc428f78436a09ddb546c not found: ID does not exist" Jan 23 15:25:17 crc kubenswrapper[4771]: I0123 15:25:17.239249 4771 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" path="/var/lib/kubelet/pods/6a1b702d-3e38-4329-b3bf-25ee8a9104ef/volumes" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.616443 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2vtmc/must-gather-rffk9"] Jan 23 15:26:45 crc kubenswrapper[4771]: E0123 15:26:45.617689 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="extract-content" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.617705 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="extract-content" Jan 23 15:26:45 crc kubenswrapper[4771]: E0123 15:26:45.617743 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="extract-utilities" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.617750 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="extract-utilities" Jan 23 15:26:45 crc kubenswrapper[4771]: E0123 15:26:45.617780 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="registry-server" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.617786 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="registry-server" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.618013 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1b702d-3e38-4329-b3bf-25ee8a9104ef" containerName="registry-server" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.619225 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.621581 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2vtmc"/"openshift-service-ca.crt" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.626026 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2vtmc"/"kube-root-ca.crt" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.626371 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2vtmc"/"default-dockercfg-kq658" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.636492 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2vtmc/must-gather-rffk9"] Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.709055 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc59aab3-99ec-45d0-847c-3a1751073555-must-gather-output\") pod \"must-gather-rffk9\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.709328 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvw7\" (UniqueName: \"kubernetes.io/projected/fc59aab3-99ec-45d0-847c-3a1751073555-kube-api-access-fxvw7\") pod \"must-gather-rffk9\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.811322 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc59aab3-99ec-45d0-847c-3a1751073555-must-gather-output\") pod \"must-gather-rffk9\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.811523 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxvw7\" (UniqueName: \"kubernetes.io/projected/fc59aab3-99ec-45d0-847c-3a1751073555-kube-api-access-fxvw7\") pod \"must-gather-rffk9\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.812433 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc59aab3-99ec-45d0-847c-3a1751073555-must-gather-output\") pod \"must-gather-rffk9\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.842144 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxvw7\" (UniqueName: \"kubernetes.io/projected/fc59aab3-99ec-45d0-847c-3a1751073555-kube-api-access-fxvw7\") pod \"must-gather-rffk9\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:45 crc kubenswrapper[4771]: I0123 15:26:45.940396 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:26:46 crc kubenswrapper[4771]: I0123 15:26:46.533131 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2vtmc/must-gather-rffk9"] Jan 23 15:26:46 crc kubenswrapper[4771]: I0123 15:26:46.582803 4771 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 15:26:46 crc kubenswrapper[4771]: I0123 15:26:46.978150 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/must-gather-rffk9" event={"ID":"fc59aab3-99ec-45d0-847c-3a1751073555","Type":"ContainerStarted","Data":"b1552adab809be8fffdefa27fbd8ec919e6b0c9993d69815a072212f44ccb731"} Jan 23 15:26:54 crc kubenswrapper[4771]: I0123 15:26:54.060643 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/must-gather-rffk9" event={"ID":"fc59aab3-99ec-45d0-847c-3a1751073555","Type":"ContainerStarted","Data":"c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf"} Jan 23 15:26:54 crc kubenswrapper[4771]: I0123 15:26:54.061589 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/must-gather-rffk9" event={"ID":"fc59aab3-99ec-45d0-847c-3a1751073555","Type":"ContainerStarted","Data":"44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00"} Jan 23 15:26:54 crc kubenswrapper[4771]: I0123 15:26:54.087620 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2vtmc/must-gather-rffk9" podStartSLOduration=2.286101428 podStartE2EDuration="9.087589483s" podCreationTimestamp="2026-01-23 15:26:45 +0000 UTC" firstStartedPulling="2026-01-23 15:26:46.581457759 +0000 UTC m=+6847.603995384" lastFinishedPulling="2026-01-23 15:26:53.382945814 +0000 UTC m=+6854.405483439" observedRunningTime="2026-01-23 15:26:54.084501156 +0000 UTC m=+6855.107038771" watchObservedRunningTime="2026-01-23 15:26:54.087589483 +0000 UTC m=+6855.110127108" Jan 23 15:26:57 crc kubenswrapper[4771]: E0123 15:26:57.937037 4771 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.243:52022->38.102.83.243:45109: write tcp 38.102.83.243:52022->38.102.83.243:45109: write: broken pipe Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.460526 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-kjcmg"] Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.462472 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.613700 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfp8\" (UniqueName: \"kubernetes.io/projected/cd5501ce-3519-4cf7-97ca-82db979e35f5-kube-api-access-zwfp8\") pod \"crc-debug-kjcmg\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.613826 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd5501ce-3519-4cf7-97ca-82db979e35f5-host\") pod \"crc-debug-kjcmg\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.719354 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwfp8\" (UniqueName: \"kubernetes.io/projected/cd5501ce-3519-4cf7-97ca-82db979e35f5-kube-api-access-zwfp8\") pod \"crc-debug-kjcmg\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.719683 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd5501ce-3519-4cf7-97ca-82db979e35f5-host\") pod \"crc-debug-kjcmg\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.719940 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd5501ce-3519-4cf7-97ca-82db979e35f5-host\") pod \"crc-debug-kjcmg\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.750060 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwfp8\" (UniqueName: \"kubernetes.io/projected/cd5501ce-3519-4cf7-97ca-82db979e35f5-kube-api-access-zwfp8\") pod \"crc-debug-kjcmg\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: I0123 15:26:58.791616 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:26:58 crc kubenswrapper[4771]: W0123 15:26:58.836947 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd5501ce_3519_4cf7_97ca_82db979e35f5.slice/crio-c12637901ad06cd6a71493e319263de23f316baaaae8e3e627bba4a8922ee0f1 WatchSource:0}: Error finding container c12637901ad06cd6a71493e319263de23f316baaaae8e3e627bba4a8922ee0f1: Status 404 returned error can't find the container with id c12637901ad06cd6a71493e319263de23f316baaaae8e3e627bba4a8922ee0f1 Jan 23 15:26:59 crc kubenswrapper[4771]: I0123 15:26:59.122425 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" event={"ID":"cd5501ce-3519-4cf7-97ca-82db979e35f5","Type":"ContainerStarted","Data":"c12637901ad06cd6a71493e319263de23f316baaaae8e3e627bba4a8922ee0f1"} Jan 23 15:27:00 crc kubenswrapper[4771]: I0123 15:27:00.312148 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:27:00 crc kubenswrapper[4771]: I0123 15:27:00.312443 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:27:12 crc kubenswrapper[4771]: I0123 15:27:12.296185 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" event={"ID":"cd5501ce-3519-4cf7-97ca-82db979e35f5","Type":"ContainerStarted","Data":"8160d4fb361cdae28c4c255ac785096e22640a5548f988e5b944348fa801645d"} Jan 23 15:27:12 crc kubenswrapper[4771]: I0123 15:27:12.324615 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" podStartSLOduration=1.483851067 podStartE2EDuration="14.324595149s" podCreationTimestamp="2026-01-23 15:26:58 +0000 UTC" firstStartedPulling="2026-01-23 15:26:58.84117891 +0000 UTC m=+6859.863716535" lastFinishedPulling="2026-01-23 15:27:11.681922992 +0000 UTC m=+6872.704460617" observedRunningTime="2026-01-23 15:27:12.318033732 +0000 UTC m=+6873.340571357" watchObservedRunningTime="2026-01-23 15:27:12.324595149 +0000 UTC m=+6873.347132774" Jan 23 15:27:30 crc kubenswrapper[4771]: I0123 15:27:30.312733 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:27:30 crc kubenswrapper[4771]: I0123 15:27:30.313310 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:27:59 crc kubenswrapper[4771]: I0123 15:27:59.879884 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd5501ce-3519-4cf7-97ca-82db979e35f5" 
containerID="8160d4fb361cdae28c4c255ac785096e22640a5548f988e5b944348fa801645d" exitCode=0 Jan 23 15:27:59 crc kubenswrapper[4771]: I0123 15:27:59.879934 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" event={"ID":"cd5501ce-3519-4cf7-97ca-82db979e35f5","Type":"ContainerDied","Data":"8160d4fb361cdae28c4c255ac785096e22640a5548f988e5b944348fa801645d"} Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.311638 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.311732 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.311817 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.313080 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9ab99fde83c203e14c42cfd2a490fda6a5c9857b22a7eec54327e57574ad4ab"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.313224 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://e9ab99fde83c203e14c42cfd2a490fda6a5c9857b22a7eec54327e57574ad4ab" gracePeriod=600 Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.895336 4771 generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="e9ab99fde83c203e14c42cfd2a490fda6a5c9857b22a7eec54327e57574ad4ab" exitCode=0 Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.895434 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"e9ab99fde83c203e14c42cfd2a490fda6a5c9857b22a7eec54327e57574ad4ab"} Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.895870 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96"} Jan 23 15:28:00 crc kubenswrapper[4771]: I0123 15:28:00.895902 4771 scope.go:117] "RemoveContainer" containerID="28f0c6dafaa95960dab48042e357bddaedc671718eb63e8a1a1de9ff4843145b" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.054809 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.115204 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-kjcmg"] Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.137455 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-kjcmg"] Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.157537 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd5501ce-3519-4cf7-97ca-82db979e35f5-host\") pod \"cd5501ce-3519-4cf7-97ca-82db979e35f5\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.157629 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd5501ce-3519-4cf7-97ca-82db979e35f5-host" (OuterVolumeSpecName: "host") pod "cd5501ce-3519-4cf7-97ca-82db979e35f5" (UID: "cd5501ce-3519-4cf7-97ca-82db979e35f5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.157955 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwfp8\" (UniqueName: \"kubernetes.io/projected/cd5501ce-3519-4cf7-97ca-82db979e35f5-kube-api-access-zwfp8\") pod \"cd5501ce-3519-4cf7-97ca-82db979e35f5\" (UID: \"cd5501ce-3519-4cf7-97ca-82db979e35f5\") " Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.158443 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd5501ce-3519-4cf7-97ca-82db979e35f5-host\") on node \"crc\" DevicePath \"\"" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.187364 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5501ce-3519-4cf7-97ca-82db979e35f5-kube-api-access-zwfp8" (OuterVolumeSpecName: "kube-api-access-zwfp8") pod "cd5501ce-3519-4cf7-97ca-82db979e35f5" (UID: "cd5501ce-3519-4cf7-97ca-82db979e35f5"). InnerVolumeSpecName "kube-api-access-zwfp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.246998 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd5501ce-3519-4cf7-97ca-82db979e35f5" path="/var/lib/kubelet/pods/cd5501ce-3519-4cf7-97ca-82db979e35f5/volumes" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.261349 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwfp8\" (UniqueName: \"kubernetes.io/projected/cd5501ce-3519-4cf7-97ca-82db979e35f5-kube-api-access-zwfp8\") on node \"crc\" DevicePath \"\"" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.910823 4771 scope.go:117] "RemoveContainer" containerID="8160d4fb361cdae28c4c255ac785096e22640a5548f988e5b944348fa801645d" Jan 23 15:28:01 crc kubenswrapper[4771]: I0123 15:28:01.910852 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-kjcmg" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.507017 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-nww25"] Jan 23 15:28:02 crc kubenswrapper[4771]: E0123 15:28:02.508338 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5501ce-3519-4cf7-97ca-82db979e35f5" containerName="container-00" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.508356 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5501ce-3519-4cf7-97ca-82db979e35f5" containerName="container-00" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.508705 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5501ce-3519-4cf7-97ca-82db979e35f5" containerName="container-00" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.509690 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.600944 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd99x\" (UniqueName: \"kubernetes.io/projected/47896e8f-669c-424b-bba9-1e8f17f2ff3a-kube-api-access-dd99x\") pod \"crc-debug-nww25\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.601039 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47896e8f-669c-424b-bba9-1e8f17f2ff3a-host\") pod \"crc-debug-nww25\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.703583 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd99x\" (UniqueName: \"kubernetes.io/projected/47896e8f-669c-424b-bba9-1e8f17f2ff3a-kube-api-access-dd99x\") pod \"crc-debug-nww25\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.703648 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47896e8f-669c-424b-bba9-1e8f17f2ff3a-host\") pod \"crc-debug-nww25\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.703895 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47896e8f-669c-424b-bba9-1e8f17f2ff3a-host\") pod \"crc-debug-nww25\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.731389 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd99x\" (UniqueName: \"kubernetes.io/projected/47896e8f-669c-424b-bba9-1e8f17f2ff3a-kube-api-access-dd99x\") pod \"crc-debug-nww25\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.837779 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:02 crc kubenswrapper[4771]: I0123 15:28:02.931564 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/crc-debug-nww25" event={"ID":"47896e8f-669c-424b-bba9-1e8f17f2ff3a","Type":"ContainerStarted","Data":"9b43e6ca6644795e728a19cc7e73308156d8a7df6f06bc5a673b41cff1eb37d7"} Jan 23 15:28:03 crc kubenswrapper[4771]: I0123 15:28:03.945255 4771 generic.go:334] "Generic (PLEG): container finished" podID="47896e8f-669c-424b-bba9-1e8f17f2ff3a" containerID="43f63381fd7aee06c77cc71d8274a1eb08336c7344f4cb052cf15a1709b958df" exitCode=0 Jan 23 15:28:03 crc kubenswrapper[4771]: I0123 15:28:03.945298 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/crc-debug-nww25" event={"ID":"47896e8f-669c-424b-bba9-1e8f17f2ff3a","Type":"ContainerDied","Data":"43f63381fd7aee06c77cc71d8274a1eb08336c7344f4cb052cf15a1709b958df"} Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.140852 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.283492 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd99x\" (UniqueName: \"kubernetes.io/projected/47896e8f-669c-424b-bba9-1e8f17f2ff3a-kube-api-access-dd99x\") pod \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.283553 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47896e8f-669c-424b-bba9-1e8f17f2ff3a-host\") pod \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\" (UID: \"47896e8f-669c-424b-bba9-1e8f17f2ff3a\") " Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.284691 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47896e8f-669c-424b-bba9-1e8f17f2ff3a-host" (OuterVolumeSpecName: "host") pod "47896e8f-669c-424b-bba9-1e8f17f2ff3a" (UID: "47896e8f-669c-424b-bba9-1e8f17f2ff3a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.316348 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47896e8f-669c-424b-bba9-1e8f17f2ff3a-kube-api-access-dd99x" (OuterVolumeSpecName: "kube-api-access-dd99x") pod "47896e8f-669c-424b-bba9-1e8f17f2ff3a" (UID: "47896e8f-669c-424b-bba9-1e8f17f2ff3a"). InnerVolumeSpecName "kube-api-access-dd99x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.386702 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/47896e8f-669c-424b-bba9-1e8f17f2ff3a-host\") on node \"crc\" DevicePath \"\"" Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.386941 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd99x\" (UniqueName: \"kubernetes.io/projected/47896e8f-669c-424b-bba9-1e8f17f2ff3a-kube-api-access-dd99x\") on node \"crc\" DevicePath \"\"" Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.717580 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-nww25"] Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.728175 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-nww25"] Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.988127 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b43e6ca6644795e728a19cc7e73308156d8a7df6f06bc5a673b41cff1eb37d7" Jan 23 15:28:05 crc kubenswrapper[4771]: I0123 15:28:05.988195 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-nww25" Jan 23 15:28:06 crc kubenswrapper[4771]: I0123 15:28:06.941393 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-pm65f"] Jan 23 15:28:06 crc kubenswrapper[4771]: E0123 15:28:06.942040 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47896e8f-669c-424b-bba9-1e8f17f2ff3a" containerName="container-00" Jan 23 15:28:06 crc kubenswrapper[4771]: I0123 15:28:06.942058 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="47896e8f-669c-424b-bba9-1e8f17f2ff3a" containerName="container-00" Jan 23 15:28:06 crc kubenswrapper[4771]: I0123 15:28:06.942361 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="47896e8f-669c-424b-bba9-1e8f17f2ff3a" containerName="container-00" Jan 23 15:28:06 crc kubenswrapper[4771]: I0123 15:28:06.943531 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.061725 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd64k\" (UniqueName: \"kubernetes.io/projected/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-kube-api-access-gd64k\") pod \"crc-debug-pm65f\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.062148 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-host\") pod \"crc-debug-pm65f\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.164475 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd64k\" (UniqueName: \"kubernetes.io/projected/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-kube-api-access-gd64k\") pod \"crc-debug-pm65f\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.164531 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-host\") pod \"crc-debug-pm65f\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.164728 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-host\") pod \"crc-debug-pm65f\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.203202 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd64k\" (UniqueName: \"kubernetes.io/projected/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-kube-api-access-gd64k\") pod \"crc-debug-pm65f\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.250531 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47896e8f-669c-424b-bba9-1e8f17f2ff3a" path="/var/lib/kubelet/pods/47896e8f-669c-424b-bba9-1e8f17f2ff3a/volumes" Jan 23 15:28:07 crc kubenswrapper[4771]: I0123 15:28:07.269659 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:07 crc kubenswrapper[4771]: W0123 15:28:07.313698 4771 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e4b0339_01b6_411c_9cc6_d7bf6a844cb1.slice/crio-ed9c3622b531cd2a69811673eca84b33c0035fd945fe328611dddd58cf39a2d2 WatchSource:0}: Error finding container ed9c3622b531cd2a69811673eca84b33c0035fd945fe328611dddd58cf39a2d2: Status 404 returned error can't find the container with id ed9c3622b531cd2a69811673eca84b33c0035fd945fe328611dddd58cf39a2d2 Jan 23 15:28:08 crc kubenswrapper[4771]: I0123 15:28:08.014500 4771 generic.go:334] "Generic (PLEG): container finished" podID="1e4b0339-01b6-411c-9cc6-d7bf6a844cb1" containerID="f2f96554ef530b9368b3d2cf4be95319917144facb1346e168231001c343efd9" exitCode=0 Jan 23 15:28:08 crc kubenswrapper[4771]: I0123 15:28:08.014583 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/crc-debug-pm65f" event={"ID":"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1","Type":"ContainerDied","Data":"f2f96554ef530b9368b3d2cf4be95319917144facb1346e168231001c343efd9"} Jan 23 15:28:08 crc kubenswrapper[4771]: I0123 15:28:08.014977 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/crc-debug-pm65f" event={"ID":"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1","Type":"ContainerStarted","Data":"ed9c3622b531cd2a69811673eca84b33c0035fd945fe328611dddd58cf39a2d2"} Jan 23 15:28:08 crc kubenswrapper[4771]: I0123 15:28:08.067317 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-pm65f"] Jan 23 15:28:08 crc kubenswrapper[4771]: I0123 15:28:08.078217 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2vtmc/crc-debug-pm65f"] Jan 23 15:28:09 crc kubenswrapper[4771]: I0123 15:28:09.181000 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:09 crc kubenswrapper[4771]: I0123 15:28:09.373890 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-host\") pod \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " Jan 23 15:28:09 crc kubenswrapper[4771]: I0123 15:28:09.374003 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-host" (OuterVolumeSpecName: "host") pod "1e4b0339-01b6-411c-9cc6-d7bf6a844cb1" (UID: "1e4b0339-01b6-411c-9cc6-d7bf6a844cb1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 15:28:09 crc kubenswrapper[4771]: I0123 15:28:09.374137 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd64k\" (UniqueName: \"kubernetes.io/projected/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-kube-api-access-gd64k\") pod \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\" (UID: \"1e4b0339-01b6-411c-9cc6-d7bf6a844cb1\") " Jan 23 15:28:09 crc kubenswrapper[4771]: I0123 15:28:09.374862 4771 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-host\") on node \"crc\" DevicePath \"\"" Jan 23 15:28:09 crc kubenswrapper[4771]: I0123 15:28:09.382727 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-kube-api-access-gd64k" (OuterVolumeSpecName: "kube-api-access-gd64k") pod "1e4b0339-01b6-411c-9cc6-d7bf6a844cb1" (UID: "1e4b0339-01b6-411c-9cc6-d7bf6a844cb1"). InnerVolumeSpecName "kube-api-access-gd64k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:28:09 crc kubenswrapper[4771]: I0123 15:28:09.477736 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd64k\" (UniqueName: \"kubernetes.io/projected/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1-kube-api-access-gd64k\") on node \"crc\" DevicePath \"\"" Jan 23 15:28:10 crc kubenswrapper[4771]: I0123 15:28:10.048321 4771 scope.go:117] "RemoveContainer" containerID="f2f96554ef530b9368b3d2cf4be95319917144facb1346e168231001c343efd9" Jan 23 15:28:10 crc kubenswrapper[4771]: I0123 15:28:10.048630 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2vtmc/crc-debug-pm65f" Jan 23 15:28:11 crc kubenswrapper[4771]: I0123 15:28:11.240857 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4b0339-01b6-411c-9cc6-d7bf6a844cb1" path="/var/lib/kubelet/pods/1e4b0339-01b6-411c-9cc6-d7bf6a844cb1/volumes" Jan 23 15:28:49 crc kubenswrapper[4771]: I0123 15:28:49.740601 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7648986c66-7zlgv_ee927340-158a-4961-a78f-c8ae1fae907f/barbican-api/0.log" Jan 23 15:28:50 crc kubenswrapper[4771]: I0123 15:28:50.019064 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7648986c66-7zlgv_ee927340-158a-4961-a78f-c8ae1fae907f/barbican-api-log/0.log" Jan 23 15:28:50 crc kubenswrapper[4771]: I0123 15:28:50.445189 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-58f475d4c8-2cpwk_b369de15-be5b-46dc-9a6a-5bd2cdca01a3/barbican-keystone-listener/0.log" Jan 23 15:28:50 crc kubenswrapper[4771]: I0123 15:28:50.611188 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-58f475d4c8-2cpwk_b369de15-be5b-46dc-9a6a-5bd2cdca01a3/barbican-keystone-listener-log/0.log" Jan 23 15:28:50 crc kubenswrapper[4771]: I0123 15:28:50.621709 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5db75f46cc-8gg5z_6869d87c-129e-4f55-947d-b1dbcc1eb7fb/barbican-worker/0.log" Jan 23 15:28:50 crc kubenswrapper[4771]: I0123 15:28:50.744843 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5db75f46cc-8gg5z_6869d87c-129e-4f55-947d-b1dbcc1eb7fb/barbican-worker-log/0.log" Jan 23 15:28:50 crc kubenswrapper[4771]: I0123 15:28:50.932098 4771 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-d92t4_30a335c9-357c-4ea4-8737-d8d795f1a05d/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:51 crc kubenswrapper[4771]: I0123 15:28:51.147324 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_91224b19-1b86-4154-ab78-a1004a2f9c0d/ceilometer-central-agent/0.log" Jan 23 15:28:51 crc kubenswrapper[4771]: I0123 15:28:51.248161 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_91224b19-1b86-4154-ab78-a1004a2f9c0d/proxy-httpd/0.log" Jan 23 15:28:51 crc kubenswrapper[4771]: I0123 15:28:51.312053 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_91224b19-1b86-4154-ab78-a1004a2f9c0d/sg-core/0.log" Jan 23 15:28:51 crc kubenswrapper[4771]: I0123 15:28:51.323342 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_91224b19-1b86-4154-ab78-a1004a2f9c0d/ceilometer-notification-agent/0.log" Jan 23 15:28:51 crc kubenswrapper[4771]: I0123 15:28:51.664598 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d81fb1e-8409-4355-8ffc-58fa97951a58/cinder-api-log/0.log" Jan 23 15:28:51 crc kubenswrapper[4771]: I0123 15:28:51.786816 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5d81fb1e-8409-4355-8ffc-58fa97951a58/cinder-api/0.log" Jan 23 15:28:51 crc kubenswrapper[4771]: I0123 15:28:51.879313 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b/cinder-backup/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.049814 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_9a5b7a0c-62d2-44c1-b1c2-3455ebc3523b/probe/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.101653 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b/cinder-scheduler/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.232296 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_eb06b4ea-8e71-4ee7-bfec-297f7bc2b79b/probe/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.446125 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_3f3184c8-6eb4-417e-aaab-707a459e8d6e/cinder-volume/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.531363 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_3f3184c8-6eb4-417e-aaab-707a459e8d6e/probe/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.679735 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_b9fc32dc-49be-4722-9430-260c9ae2da80/cinder-volume/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.792669 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_b9fc32dc-49be-4722-9430-260c9ae2da80/probe/0.log" Jan 23 15:28:52 crc kubenswrapper[4771]: I0123 15:28:52.893118 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-w4fwm_c9057e27-502a-48d6-b1d5-0fc8e198ab78/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.044462 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7dl5l_a9074e8c-55ca-48b3-ae5f-9b06c4c3da84/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.171017 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cdb8b545-cwd2l_47bf4b2c-e16d-47d8-b088-4cba3cf18643/init/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.407893 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cdb8b545-cwd2l_47bf4b2c-e16d-47d8-b088-4cba3cf18643/init/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.588849 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-dspqs_efe8756a-9628-43ad-a9f1-7ff7e65c5fc1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.642272 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cdb8b545-cwd2l_47bf4b2c-e16d-47d8-b088-4cba3cf18643/dnsmasq-dns/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.784639 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff136ca3-c0df-418b-b38f-9fed67d6ab21/glance-httpd/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.877574 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff136ca3-c0df-418b-b38f-9fed67d6ab21/glance-log/0.log" Jan 23 15:28:53 crc kubenswrapper[4771]: I0123 15:28:53.986746 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_361f4847-2b8b-40d4-b0cf-2eca9dc1c5db/glance-httpd/0.log" Jan 23 15:28:54 crc kubenswrapper[4771]: I0123 15:28:54.159332 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_361f4847-2b8b-40d4-b0cf-2eca9dc1c5db/glance-log/0.log" Jan 23 15:28:54 crc kubenswrapper[4771]: I0123 15:28:54.391152 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57cbdcc8d-5lcfn_dd12560a-7353-492b-8037-822d7aceb4e0/horizon/0.log" Jan 23 15:28:54 crc kubenswrapper[4771]: I0123 15:28:54.496585 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7sv4j_331d5ea1-caae-41d0-8986-01a8e698861c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:55 crc kubenswrapper[4771]: I0123 15:28:55.007917 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fdvf5_0d60c494-e705-4c10-aabf-2d07734e9048/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:55 crc kubenswrapper[4771]: I0123 15:28:55.360189 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29486281-v84cc_22d9b7bd-62ab-4e1c-bbf4-d8b4fb440afd/keystone-cron/0.log" Jan 23 15:28:55 crc kubenswrapper[4771]: I0123 15:28:55.398238 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-57cbdcc8d-5lcfn_dd12560a-7353-492b-8037-822d7aceb4e0/horizon-log/0.log" Jan 23 15:28:55 crc kubenswrapper[4771]: I0123 15:28:55.547375 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c75694975-s585q_6860d79b-06bf-4ca1-b0a1-2d05a7b594c0/keystone-api/0.log" Jan 23 15:28:55 crc kubenswrapper[4771]: I0123 15:28:55.586171 4771 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_keystone-cron-29486341-k2bv2_db0f18a5-7cb4-4bc4-8bd7-9efcf32aa013/keystone-cron/0.log" Jan 23 15:28:55 crc kubenswrapper[4771]: I0123 15:28:55.659975 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0b974486-d677-48cc-acaf-785af0b5555a/kube-state-metrics/0.log" Jan 23 15:28:55 crc kubenswrapper[4771]: I0123 15:28:55.878248 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-b4p7z_af9e27b6-338f-471a-ae8b-041038e92cfe/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:56 crc kubenswrapper[4771]: I0123 15:28:56.322399 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fd76d6849-9jhnn_c97f9128-e90c-482e-9b39-0505b4195ced/neutron-api/0.log" Jan 23 15:28:56 crc kubenswrapper[4771]: I0123 15:28:56.368007 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fd76d6849-9jhnn_c97f9128-e90c-482e-9b39-0505b4195ced/neutron-httpd/0.log" Jan 23 15:28:56 crc kubenswrapper[4771]: I0123 15:28:56.418790 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-gh5jt_0ec79d97-ee55-489b-935e-51ae32de7ca3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:57 crc kubenswrapper[4771]: I0123 15:28:57.301307 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_aae7cd57-ed9a-4e28-bea0-c1240a462e64/nova-cell0-conductor-conductor/0.log" Jan 23 15:28:57 crc kubenswrapper[4771]: I0123 15:28:57.585117 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_77ab6d81-eada-4016-a21b-4319283e7b50/nova-cell1-conductor-conductor/0.log" Jan 23 15:28:57 crc kubenswrapper[4771]: I0123 15:28:57.748795 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_29cef0f5-6afd-4a1f-af4a-df21e4e9336a/nova-api-log/0.log" Jan 23 15:28:58 crc kubenswrapper[4771]: I0123 15:28:58.101162 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-s6mfj_922839ae-8351-47a1-8478-bd565744b023/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:28:58 crc kubenswrapper[4771]: I0123 15:28:58.130256 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_33537960-af54-4801-9017-f01b27b5e8e0/nova-cell1-novncproxy-novncproxy/0.log" Jan 23 15:28:58 crc kubenswrapper[4771]: I0123 15:28:58.358237 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_29cef0f5-6afd-4a1f-af4a-df21e4e9336a/nova-api-api/0.log" Jan 23 15:28:58 crc kubenswrapper[4771]: I0123 15:28:58.481784 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_927c2f44-e120-4c3a-8871-83f28acd42bb/nova-metadata-log/0.log" Jan 23 15:28:58 crc kubenswrapper[4771]: I0123 15:28:58.825668 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_90863ead-98c1-4258-b980-919471f6d76c/mysql-bootstrap/0.log" Jan 23 15:28:58 crc kubenswrapper[4771]: I0123 15:28:58.967750 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6cbd0d04-3607-4e92-b24c-e2004269a392/nova-scheduler-scheduler/0.log" Jan 23 15:28:59 crc kubenswrapper[4771]: I0123 15:28:59.474302 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_90863ead-98c1-4258-b980-919471f6d76c/galera/0.log" Jan 23 15:28:59 crc kubenswrapper[4771]: I0123 15:28:59.512548 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_90863ead-98c1-4258-b980-919471f6d76c/mysql-bootstrap/0.log" Jan 23 15:28:59 crc kubenswrapper[4771]: I0123 15:28:59.757164 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_34159c2a-f5ad-4b4c-a1c6-556001c43134/mysql-bootstrap/0.log" Jan 23 15:29:00 crc kubenswrapper[4771]: I0123 15:29:00.076955 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_34159c2a-f5ad-4b4c-a1c6-556001c43134/mysql-bootstrap/0.log" Jan 23 15:29:00 crc kubenswrapper[4771]: I0123 15:29:00.094239 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_34159c2a-f5ad-4b4c-a1c6-556001c43134/galera/0.log" Jan 23 15:29:00 crc kubenswrapper[4771]: I0123 15:29:00.414855 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d52cbbcd-9ccc-4f07-a407-15edb7bde07e/openstackclient/0.log" Jan 23 15:29:00 crc kubenswrapper[4771]: I0123 15:29:00.498199 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bt9nt_47d1109c-2e29-4a97-9c19-b4b50b2e4014/openstack-network-exporter/0.log" Jan 23 15:29:00 crc kubenswrapper[4771]: I0123 15:29:00.690850 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nxbfr_686807bb-241a-4fdb-bca8-0eba0745aed1/ovn-controller/0.log" Jan 23 15:29:00 crc kubenswrapper[4771]: I0123 15:29:00.979978 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7txgd_f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b/ovsdb-server-init/0.log" Jan 23 15:29:01 crc kubenswrapper[4771]: I0123 15:29:01.262070 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7txgd_f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b/ovsdb-server-init/0.log" Jan 23 15:29:01 crc kubenswrapper[4771]: I0123 15:29:01.274859 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7txgd_f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b/ovsdb-server/0.log" Jan 23 15:29:01 crc kubenswrapper[4771]: I0123 15:29:01.669485 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7txgd_f5ba4b66-fa9c-4a86-b4b9-7ce955500e1b/ovs-vswitchd/0.log" Jan 23 15:29:01 crc kubenswrapper[4771]: I0123 15:29:01.684973 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-s9bmb_161b748e-6a65-4a13-872a-5f00eb187424/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:29:01 crc kubenswrapper[4771]: I0123 15:29:01.874805 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e4f9071a-a9f7-46ca-905f-aac12e33f2f7/openstack-network-exporter/0.log" Jan 23 15:29:01 crc kubenswrapper[4771]: I0123 15:29:01.950291 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_927c2f44-e120-4c3a-8871-83f28acd42bb/nova-metadata-metadata/0.log" Jan 23 15:29:01 crc kubenswrapper[4771]: I0123 15:29:01.962135 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e4f9071a-a9f7-46ca-905f-aac12e33f2f7/ovn-northd/0.log" Jan 23 15:29:02 crc kubenswrapper[4771]: I0123 15:29:02.263483 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_c67783c2-46a6-49f8-86e7-e32d83a45526/ovsdbserver-nb/0.log" Jan 23 15:29:02 crc kubenswrapper[4771]: I0123 15:29:02.279225 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c67783c2-46a6-49f8-86e7-e32d83a45526/openstack-network-exporter/0.log" Jan 23 15:29:02 crc kubenswrapper[4771]: I0123 15:29:02.434252 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_000f2478-86af-4e39-80c3-790a0457923e/openstack-network-exporter/0.log" Jan 23 15:29:02 crc kubenswrapper[4771]: I0123 15:29:02.573839 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_000f2478-86af-4e39-80c3-790a0457923e/ovsdbserver-sb/0.log" Jan 23 15:29:02 crc kubenswrapper[4771]: I0123 15:29:02.836171 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5997f6f59b-xjrp4_252f0b58-bb25-4d24-98a2-22cde8bb2daf/placement-api/0.log" Jan 23 15:29:02 crc kubenswrapper[4771]: I0123 15:29:02.963047 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b0542ded-3d93-4a78-a31b-f25fce8407e6/init-config-reloader/0.log" Jan 23 15:29:02 crc kubenswrapper[4771]: I0123 15:29:02.963130 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5997f6f59b-xjrp4_252f0b58-bb25-4d24-98a2-22cde8bb2daf/placement-log/0.log" Jan 23 15:29:03 crc kubenswrapper[4771]: I0123 15:29:03.264833 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b0542ded-3d93-4a78-a31b-f25fce8407e6/config-reloader/0.log" Jan 23 15:29:03 crc kubenswrapper[4771]: I0123 15:29:03.282902 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b0542ded-3d93-4a78-a31b-f25fce8407e6/prometheus/0.log" Jan 23 15:29:03 crc kubenswrapper[4771]: I0123 15:29:03.321023 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b0542ded-3d93-4a78-a31b-f25fce8407e6/thanos-sidecar/0.log" Jan 23 15:29:03 crc kubenswrapper[4771]: I0123 15:29:03.347734 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b0542ded-3d93-4a78-a31b-f25fce8407e6/init-config-reloader/0.log" Jan 23 15:29:03 crc kubenswrapper[4771]: I0123 15:29:03.716804 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_14b1f3d7-6878-46af-ae81-88676519f44b/setup-container/0.log" Jan 23 15:29:04 crc kubenswrapper[4771]: I0123 15:29:04.288663 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_add41260-19c8-4989-a0a9-97a93316c6e8/setup-container/0.log" Jan 23 15:29:04 crc kubenswrapper[4771]: I0123 15:29:04.325868 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_14b1f3d7-6878-46af-ae81-88676519f44b/setup-container/0.log" Jan 23 15:29:04 crc kubenswrapper[4771]: I0123 15:29:04.427381 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_14b1f3d7-6878-46af-ae81-88676519f44b/rabbitmq/0.log" Jan 23 15:29:04 crc kubenswrapper[4771]: I0123 15:29:04.556448 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_add41260-19c8-4989-a0a9-97a93316c6e8/rabbitmq/0.log" Jan 23 15:29:04 crc kubenswrapper[4771]: I0123 15:29:04.575305 4771 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_add41260-19c8-4989-a0a9-97a93316c6e8/setup-container/0.log" Jan 23 15:29:04 crc kubenswrapper[4771]: I0123 15:29:04.687870 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_12ed4577-dc9c-4535-b218-fe3580114a6f/setup-container/0.log" Jan 23 15:29:04 crc kubenswrapper[4771]: I0123 15:29:04.997229 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_12ed4577-dc9c-4535-b218-fe3580114a6f/setup-container/0.log" Jan 23 15:29:05 crc kubenswrapper[4771]: I0123 15:29:05.026839 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_12ed4577-dc9c-4535-b218-fe3580114a6f/rabbitmq/0.log" Jan 23 15:29:05 crc kubenswrapper[4771]: I0123 15:29:05.076977 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-fpj6h_f8f8277b-12c4-47fd-994c-22994850fec0/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:29:05 crc kubenswrapper[4771]: I0123 15:29:05.381095 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fn84f_79fe5e81-6503-49e2-ae4d-35cc605ac5ae/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:29:05 crc kubenswrapper[4771]: I0123 15:29:05.410274 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-hnztm_b1961816-bdc5-454b-a6e6-a21748cf812f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:29:05 crc kubenswrapper[4771]: I0123 15:29:05.709539 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-fvdh7_1259451d-71d5-486c-9046-3f03879ecfeb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:29:05 crc kubenswrapper[4771]: I0123 15:29:05.837597 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-j4gml_d3516501-2232-4e13-b529-5befbc170273/ssh-known-hosts-edpm-deployment/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.173278 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-859b449dbf-hmxw5_34108489-1bd6-4b93-a840-b58d45b1e861/proxy-server/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.391275 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-25zc4_de1618cb-bde8-4c44-846b-aabcbb2e3698/swift-ring-rebalance/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.455191 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-859b449dbf-hmxw5_34108489-1bd6-4b93-a840-b58d45b1e861/proxy-httpd/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.573038 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/account-auditor/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.680366 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/account-reaper/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.871624 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/account-replicator/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.872484 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/account-server/0.log" Jan 23 15:29:06 crc kubenswrapper[4771]: I0123 15:29:06.935386 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/container-auditor/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.124215 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/container-replicator/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.130891 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/container-server/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.183737 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/container-updater/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.273569 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/object-auditor/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.386775 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/object-expirer/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.504853 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/object-server/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.506569 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/object-replicator/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.532101 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/object-updater/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.657985 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/rsync/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.820714 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cb429d80-3c7c-4014-9a5c-d40256e70014/swift-recon-cron/0.log" Jan 23 15:29:07 crc kubenswrapper[4771]: I0123 15:29:07.850531 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-czrj8_af6c0a2c-2354-4db8-9468-951607428157/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:29:08 crc kubenswrapper[4771]: I0123 15:29:08.212730 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-96chb_45d51499-49b6-43d8-a21f-c9984307c689/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 23 15:29:08 crc kubenswrapper[4771]: I0123 15:29:08.361080 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_4b1420d2-dfc0-492c-b21d-30eda7e8c59d/tempest-tests-tempest-tests-runner/0.log" Jan 23 15:29:09 crc kubenswrapper[4771]: I0123 15:29:09.500490 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_a9b26a6d-89f0-40b9-9887-5d8aedf33ad5/watcher-applier/0.log" Jan 23 15:29:10 crc kubenswrapper[4771]: I0123 15:29:10.489959 4771 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_588014e9-5ed0-4dfc-862e-ccafe84d7c3c/watcher-api-log/0.log" Jan 23 15:29:12 crc kubenswrapper[4771]: I0123 15:29:12.429769 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_b05a43a3-db12-4944-a8d7-e2f6c27b48f3/watcher-decision-engine/0.log" Jan 23 15:29:16 crc kubenswrapper[4771]: I0123 15:29:16.103340 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_588014e9-5ed0-4dfc-862e-ccafe84d7c3c/watcher-api/0.log" Jan 23 15:29:28 crc kubenswrapper[4771]: I0123 15:29:28.300132 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_972f2298-461d-46ec-a00a-19ea21a500a5/memcached/0.log" Jan 23 15:29:43 crc kubenswrapper[4771]: I0123 15:29:43.957743 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6_d6ed33f7-5653-4575-9457-22ec51d0e961/util/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.215469 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6_d6ed33f7-5653-4575-9457-22ec51d0e961/util/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.232844 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6_d6ed33f7-5653-4575-9457-22ec51d0e961/pull/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.288110 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6_d6ed33f7-5653-4575-9457-22ec51d0e961/pull/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.442474 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6_d6ed33f7-5653-4575-9457-22ec51d0e961/extract/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.451251 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6_d6ed33f7-5653-4575-9457-22ec51d0e961/util/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.457400 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_474a4216d9c0f2838260b1f8036840d136852921f149f1ab96686b5b332jpp6_d6ed33f7-5653-4575-9457-22ec51d0e961/pull/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.752797 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-9pg25_a6bb27ef-c367-4c44-9137-7e713f44271d/manager/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.916091 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-p4s4m_e5c92a50-6224-413e-b4ca-9bdca838de01/manager/0.log" Jan 23 15:29:44 crc kubenswrapper[4771]: I0123 15:29:44.977096 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-qtsk2_7449c4dc-9594-459b-9e89-23cb5e86139b/manager/0.log" Jan 23 15:29:45 crc kubenswrapper[4771]: I0123 15:29:45.140598 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-g7vwj_f27f9a1d-08bf-4576-90e8-0d5e9438b3d7/manager/0.log" Jan 23 15:29:45 crc kubenswrapper[4771]: I0123 15:29:45.274434 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-br5g2_02c8537d-9470-4887-9fb9-0700448bbc40/manager/0.log" Jan 23 15:29:45 crc kubenswrapper[4771]: I0123 15:29:45.460749 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-x7skn_a3ff2047-0c52-4dee-a435-c88cb8c2690d/manager/0.log" Jan 23 15:29:45 crc kubenswrapper[4771]: I0123 15:29:45.959110 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-l2gvh_5d096797-513e-4b08-afe9-0c19eb099a3d/manager/0.log" Jan 23 15:29:46 crc kubenswrapper[4771]: I0123 15:29:46.260784 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-mxmqk_b4af8681-84d1-4cf7-b3a6-b167146e1973/manager/0.log" Jan 23 15:29:46 crc kubenswrapper[4771]: I0123 15:29:46.275529 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-pvkv5_8cb42a29-70c6-4e1a-9c5e-bdc8e5d69570/manager/0.log" Jan 23 15:29:46 crc kubenswrapper[4771]: I0123 15:29:46.341932 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-7tzww_368370f7-de60-484e-8ad6-35d0298c2520/manager/0.log" Jan 23 15:29:46 crc kubenswrapper[4771]: I0123 15:29:46.609177 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-m7zxn_2e2e8c05-b33d-410f-ad27-e80ed0a243ee/manager/0.log" Jan 23 15:29:46 crc kubenswrapper[4771]: I0123 15:29:46.658485 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-c9h8l_ec9af354-7d56-47ee-aa2f-be57edf2c7bc/manager/0.log" Jan 23 15:29:46 crc kubenswrapper[4771]: I0123 15:29:46.874039 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-xpp8r_06cccb54-4ed2-4ee7-af3e-c26532e49b23/manager/0.log" Jan 23 15:29:46 crc kubenswrapper[4771]: I0123 15:29:46.901245 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-89prd_d703eb08-df59-4676-a522-c869982a8772/manager/0.log" Jan 23 15:29:47 crc kubenswrapper[4771]: I0123 15:29:47.094084 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8545ccsr_4e79abf5-0755-4fec-998c-b4eba8ebe531/manager/0.log" Jan 23 15:29:47 crc kubenswrapper[4771]: I0123 15:29:47.322085 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c9f89db4c-99h7l_b1cee457-8610-42c9-be6e-3cf0f8628aba/operator/0.log" Jan 23 15:29:47 crc kubenswrapper[4771]: I0123 15:29:47.734879 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-q7jx4_c714cbf2-6b46-41ca-8469-1d4ed6545e80/registry-server/0.log" Jan 23 15:29:47 crc kubenswrapper[4771]: I0123 15:29:47.960444 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-sx4v9_55997410-62dd-4965-938a-6e1cdfba0cd5/manager/0.log" Jan 23 15:29:48 crc kubenswrapper[4771]: I0123 15:29:48.351320 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-dgqzc_6ac8c39a-105d-4ab1-afd6-4786c9aa1386/manager/0.log" Jan 23 15:29:48 crc kubenswrapper[4771]: I0123 15:29:48.487682 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vn6bc_02ecde71-15d7-4cf0-8928-505b2f0899fd/operator/0.log" Jan 23 15:29:48 crc kubenswrapper[4771]: I0123 15:29:48.735726 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-m76mt_25f36fa7-ebf2-406b-bb09-f3a83fd19685/manager/0.log" Jan 23 15:29:48 crc kubenswrapper[4771]: I0123 15:29:48.929046 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68f54d99d8-mwsk5_08c96bfa-007d-41c5-a03a-4e92c9083c3f/manager/0.log" Jan 23 15:29:49 crc kubenswrapper[4771]: I0123 15:29:49.100749 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-5s27m_763a3e24-643d-473f-bbb2-d7f4816a0b58/manager/0.log" Jan 23 15:29:49 crc kubenswrapper[4771]: I0123 15:29:49.130482 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-nm9jp_6c48b549-8b0b-4914-be38-157d50994b3b/manager/0.log" Jan 23 15:29:49 crc kubenswrapper[4771]: I0123 15:29:49.291298 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-78dbdc4d57-djf7x_eb45ce0a-9090-4553-b2b3-6d025d099f0f/manager/0.log" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.158152 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf"] Jan 23 15:30:00 crc kubenswrapper[4771]: E0123 15:30:00.159231 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4b0339-01b6-411c-9cc6-d7bf6a844cb1" containerName="container-00" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.159245 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4b0339-01b6-411c-9cc6-d7bf6a844cb1" containerName="container-00" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.159488 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4b0339-01b6-411c-9cc6-d7bf6a844cb1" containerName="container-00" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.160307 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.163002 4771 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.163183 4771 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.180074 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf"] Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.311662 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.311730 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.315844 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssj4\" (UniqueName: \"kubernetes.io/projected/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-kube-api-access-hssj4\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.316073 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-config-volume\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.316110 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-secret-volume\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.419676 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssj4\" (UniqueName: \"kubernetes.io/projected/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-kube-api-access-hssj4\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.420101 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-config-volume\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.420133 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-secret-volume\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.421284 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-config-volume\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.434309 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-secret-volume\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.447149 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssj4\" (UniqueName: \"kubernetes.io/projected/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-kube-api-access-hssj4\") pod \"collect-profiles-29486370-bvdxf\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:00 crc kubenswrapper[4771]: I0123 15:30:00.489072 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:01 crc kubenswrapper[4771]: I0123 15:30:01.011501 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf"] Jan 23 15:30:01 crc kubenswrapper[4771]: I0123 15:30:01.444105 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" event={"ID":"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e","Type":"ContainerStarted","Data":"94f0f3e7462652e396eb4875d64d227dbaae8e24eaaaea81db73978ec775f316"} Jan 23 15:30:01 crc kubenswrapper[4771]: I0123 15:30:01.444445 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" event={"ID":"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e","Type":"ContainerStarted","Data":"245e3036aac16ead7f8cb12d3c061bb90e450bf9d8afed2a3104643d361375be"} Jan 23 15:30:01 crc kubenswrapper[4771]: I0123 15:30:01.469149 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" podStartSLOduration=1.468928209 podStartE2EDuration="1.468928209s" podCreationTimestamp="2026-01-23 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 15:30:01.463352234 +0000 UTC m=+7042.485889869" watchObservedRunningTime="2026-01-23 15:30:01.468928209 +0000 UTC m=+7042.491465834" Jan 23 15:30:02 crc kubenswrapper[4771]: I0123 15:30:02.463633 4771 generic.go:334] "Generic (PLEG): container finished" podID="c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e" containerID="94f0f3e7462652e396eb4875d64d227dbaae8e24eaaaea81db73978ec775f316" exitCode=0 Jan 23 15:30:02 crc kubenswrapper[4771]: I0123 15:30:02.466347 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" event={"ID":"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e","Type":"ContainerDied","Data":"94f0f3e7462652e396eb4875d64d227dbaae8e24eaaaea81db73978ec775f316"} Jan 23 15:30:03 crc kubenswrapper[4771]: I0123 15:30:03.853310 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.009453 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hssj4\" (UniqueName: \"kubernetes.io/projected/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-kube-api-access-hssj4\") pod \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.009519 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-secret-volume\") pod \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.009559 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-config-volume\") pod \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\" (UID: \"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e\") " Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.010323 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e" (UID: "c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.026644 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e" (UID: "c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.027594 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-kube-api-access-hssj4" (OuterVolumeSpecName: "kube-api-access-hssj4") pod "c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e" (UID: "c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e"). InnerVolumeSpecName "kube-api-access-hssj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.111889 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hssj4\" (UniqueName: \"kubernetes.io/projected/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-kube-api-access-hssj4\") on node \"crc\" DevicePath \"\"" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.111930 4771 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.111940 4771 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.488240 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" event={"ID":"c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e","Type":"ContainerDied","Data":"245e3036aac16ead7f8cb12d3c061bb90e450bf9d8afed2a3104643d361375be"} Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.488731 4771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="245e3036aac16ead7f8cb12d3c061bb90e450bf9d8afed2a3104643d361375be" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.488294 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486370-bvdxf" Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.547679 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn"] Jan 23 15:30:04 crc kubenswrapper[4771]: I0123 15:30:04.558029 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-vb2pn"] Jan 23 15:30:05 crc kubenswrapper[4771]: I0123 15:30:05.241592 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b915df75-b9fb-44ce-8604-e65d8990cf26" path="/var/lib/kubelet/pods/b915df75-b9fb-44ce-8604-e65d8990cf26/volumes" Jan 23 15:30:10 crc kubenswrapper[4771]: I0123 15:30:10.350905 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-g658k_60374146-a25c-42d9-82d8-dcad9368144c/control-plane-machine-set-operator/0.log" Jan 23 15:30:10 crc kubenswrapper[4771]: I0123 15:30:10.573654 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z5t5f_ef981f89-01c0-438a-a1b3-1f0e18d3496e/kube-rbac-proxy/0.log" Jan 23 15:30:10 crc kubenswrapper[4771]: I0123 15:30:10.622193 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z5t5f_ef981f89-01c0-438a-a1b3-1f0e18d3496e/machine-api-operator/0.log" Jan 23 15:30:26 crc kubenswrapper[4771]: I0123 15:30:26.062226 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-njl7x_caf0d360-cd4f-4a23-8104-162c00e9b1b3/cert-manager-controller/0.log" Jan 23 15:30:26 crc kubenswrapper[4771]: I0123 15:30:26.303617 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-xfpd8_a4f83ec4-9e43-4e12-a479-5df0667e28f9/cert-manager-cainjector/0.log" Jan 
23 15:30:26 crc kubenswrapper[4771]: I0123 15:30:26.327938 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-g98gg_28bae9dd-6a27-42bc-b05c-9e14f92a5afe/cert-manager-webhook/0.log" Jan 23 15:30:30 crc kubenswrapper[4771]: I0123 15:30:30.312632 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:30:30 crc kubenswrapper[4771]: I0123 15:30:30.313256 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:30:42 crc kubenswrapper[4771]: I0123 15:30:42.500422 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-5zrsc_34806cb5-67fe-4e5e-a50d-3993df18ceef/nmstate-console-plugin/0.log" Jan 23 15:30:42 crc kubenswrapper[4771]: I0123 15:30:42.730385 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q45wt_0f261a76-9325-4a93-b553-7937534cc5a9/nmstate-handler/0.log" Jan 23 15:30:42 crc kubenswrapper[4771]: I0123 15:30:42.772283 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ggk4b_285b790a-ad5d-4f7a-aba6-ffc18d69d449/kube-rbac-proxy/0.log" Jan 23 15:30:42 crc kubenswrapper[4771]: I0123 15:30:42.890655 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ggk4b_285b790a-ad5d-4f7a-aba6-ffc18d69d449/nmstate-metrics/0.log" Jan 23 15:30:42 crc kubenswrapper[4771]: I0123 15:30:42.958543 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-cbcpg_c67bd392-c996-44eb-af78-1822d2b08b16/nmstate-operator/0.log" Jan 23 15:30:43 crc kubenswrapper[4771]: I0123 15:30:43.082113 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-hnhr7_02c5a515-61c6-46ae-ba60-5c1c04e7bcfd/nmstate-webhook/0.log" Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.799037 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jddgj"] Jan 23 15:30:53 crc kubenswrapper[4771]: E0123 15:30:53.800152 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e" containerName="collect-profiles" Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.800168 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e" containerName="collect-profiles" Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.800401 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4552ec7-92f9-4050-b5b4-97fdcbc9ac7e" containerName="collect-profiles" Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.802028 4771 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.808570 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jddgj"] Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.961698 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-utilities\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.962678 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-catalog-content\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:53 crc kubenswrapper[4771]: I0123 15:30:53.962939 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knx4s\" (UniqueName: \"kubernetes.io/projected/477b72ad-f50b-43f5-9ef6-ead43428f566-kube-api-access-knx4s\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.065466 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-utilities\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.065710 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-catalog-content\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.065775 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knx4s\" (UniqueName: \"kubernetes.io/projected/477b72ad-f50b-43f5-9ef6-ead43428f566-kube-api-access-knx4s\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.066329 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-utilities\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.066338 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-catalog-content\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.094881 4771 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-knx4s\" (UniqueName: \"kubernetes.io/projected/477b72ad-f50b-43f5-9ef6-ead43428f566-kube-api-access-knx4s\") pod \"redhat-marketplace-jddgj\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.171529 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:30:54 crc kubenswrapper[4771]: I0123 15:30:54.700982 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jddgj"] Jan 23 15:30:55 crc kubenswrapper[4771]: I0123 15:30:55.062467 4771 generic.go:334] "Generic (PLEG): container finished" podID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerID="8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513" exitCode=0 Jan 23 15:30:55 crc kubenswrapper[4771]: I0123 15:30:55.062784 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jddgj" event={"ID":"477b72ad-f50b-43f5-9ef6-ead43428f566","Type":"ContainerDied","Data":"8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513"} Jan 23 15:30:55 crc kubenswrapper[4771]: I0123 15:30:55.062903 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jddgj" event={"ID":"477b72ad-f50b-43f5-9ef6-ead43428f566","Type":"ContainerStarted","Data":"a0b0032f50e84bda38e957e83eda38f7d109a0a0d0a79581fea764d4ed33e818"} Jan 23 15:30:56 crc kubenswrapper[4771]: I0123 15:30:56.078256 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jddgj" event={"ID":"477b72ad-f50b-43f5-9ef6-ead43428f566","Type":"ContainerStarted","Data":"789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8"} Jan 23 15:30:57 crc kubenswrapper[4771]: I0123 15:30:57.093559 4771 generic.go:334] "Generic (PLEG): container finished" podID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerID="789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8" exitCode=0 Jan 23 15:30:57 crc kubenswrapper[4771]: I0123 15:30:57.094048 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jddgj" event={"ID":"477b72ad-f50b-43f5-9ef6-ead43428f566","Type":"ContainerDied","Data":"789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8"} Jan 23 15:30:58 crc kubenswrapper[4771]: I0123 15:30:58.122220 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jddgj" event={"ID":"477b72ad-f50b-43f5-9ef6-ead43428f566","Type":"ContainerStarted","Data":"a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81"} Jan 23 15:30:58 crc kubenswrapper[4771]: I0123 15:30:58.151836 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jddgj" podStartSLOduration=2.70034434 podStartE2EDuration="5.151806067s" podCreationTimestamp="2026-01-23 15:30:53 +0000 UTC" firstStartedPulling="2026-01-23 15:30:55.064904121 +0000 UTC m=+7096.087441746" lastFinishedPulling="2026-01-23 15:30:57.516365858 +0000 UTC m=+7098.538903473" observedRunningTime="2026-01-23 15:30:58.146615643 +0000 UTC m=+7099.169153268" watchObservedRunningTime="2026-01-23 15:30:58.151806067 +0000 UTC m=+7099.174343692" Jan 23 15:30:59 crc kubenswrapper[4771]: I0123 15:30:59.479347 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w6ztm_6b8c9e01-91e4-466a-bcca-f2302f8cf535/prometheus-operator/0.log" Jan 23 15:30:59 crc kubenswrapper[4771]: I0123 15:30:59.701335 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq_f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b/prometheus-operator-admission-webhook/0.log" Jan 23 15:30:59 crc kubenswrapper[4771]: I0123 15:30:59.764825 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8_1b61dbbd-9e69-43e6-9a83-68115a11bef6/prometheus-operator-admission-webhook/0.log" Jan 23 15:30:59 crc kubenswrapper[4771]: I0123 15:30:59.979328 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-hp46t_4d0ca4c2-0c2e-438c-a300-71ec0d624905/perses-operator/0.log" Jan 23 15:31:00 crc kubenswrapper[4771]: I0123 15:31:00.150265 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tldhc_e93a101d-0d44-4b57-8ec6-bd9911f2e61b/operator/0.log" Jan 23 15:31:00 crc kubenswrapper[4771]: I0123 15:31:00.312058 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:31:00 crc kubenswrapper[4771]: I0123 15:31:00.312110 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 15:31:00 crc kubenswrapper[4771]: I0123 15:31:00.312151 4771 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z299d" Jan 23 15:31:00 crc kubenswrapper[4771]: I0123 15:31:00.313049 4771 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96"} pod="openshift-machine-config-operator/machine-config-daemon-z299d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 15:31:00 crc kubenswrapper[4771]: I0123 15:31:00.313104 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" containerID="cri-o://736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" gracePeriod=600 Jan 23 15:31:00 crc kubenswrapper[4771]: E0123 15:31:00.942650 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:31:01 crc kubenswrapper[4771]: I0123 15:31:01.153522 4771 
generic.go:334] "Generic (PLEG): container finished" podID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" exitCode=0 Jan 23 15:31:01 crc kubenswrapper[4771]: I0123 15:31:01.153605 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerDied","Data":"736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96"} Jan 23 15:31:01 crc kubenswrapper[4771]: I0123 15:31:01.154644 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:31:01 crc kubenswrapper[4771]: E0123 15:31:01.154909 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:31:01 crc kubenswrapper[4771]: I0123 15:31:01.155061 4771 scope.go:117] "RemoveContainer" containerID="e9ab99fde83c203e14c42cfd2a490fda6a5c9857b22a7eec54327e57574ad4ab" Jan 23 15:31:02 crc kubenswrapper[4771]: I0123 15:31:02.440153 4771 scope.go:117] "RemoveContainer" containerID="d3fa10be9f000948bebeec0221182dacc167eee2793765068bb3edc21db9d338" Jan 23 15:31:04 crc kubenswrapper[4771]: I0123 15:31:04.172103 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:31:04 crc kubenswrapper[4771]: I0123 15:31:04.172489 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:31:04 crc kubenswrapper[4771]: I0123 15:31:04.227434 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:31:04 crc kubenswrapper[4771]: I0123 15:31:04.286389 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:31:04 crc kubenswrapper[4771]: I0123 15:31:04.472556 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jddgj"] Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.223963 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jddgj" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="registry-server" containerID="cri-o://a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81" gracePeriod=2 Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.757854 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.870199 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-utilities\") pod \"477b72ad-f50b-43f5-9ef6-ead43428f566\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.870403 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knx4s\" (UniqueName: \"kubernetes.io/projected/477b72ad-f50b-43f5-9ef6-ead43428f566-kube-api-access-knx4s\") pod \"477b72ad-f50b-43f5-9ef6-ead43428f566\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.870718 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-catalog-content\") pod \"477b72ad-f50b-43f5-9ef6-ead43428f566\" (UID: \"477b72ad-f50b-43f5-9ef6-ead43428f566\") " Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.871195 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-utilities" (OuterVolumeSpecName: "utilities") pod "477b72ad-f50b-43f5-9ef6-ead43428f566" (UID: "477b72ad-f50b-43f5-9ef6-ead43428f566"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.872224 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.892589 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "477b72ad-f50b-43f5-9ef6-ead43428f566" (UID: "477b72ad-f50b-43f5-9ef6-ead43428f566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.893896 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477b72ad-f50b-43f5-9ef6-ead43428f566-kube-api-access-knx4s" (OuterVolumeSpecName: "kube-api-access-knx4s") pod "477b72ad-f50b-43f5-9ef6-ead43428f566" (UID: "477b72ad-f50b-43f5-9ef6-ead43428f566"). InnerVolumeSpecName "kube-api-access-knx4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.974480 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knx4s\" (UniqueName: \"kubernetes.io/projected/477b72ad-f50b-43f5-9ef6-ead43428f566-kube-api-access-knx4s\") on node \"crc\" DevicePath \"\"" Jan 23 15:31:06 crc kubenswrapper[4771]: I0123 15:31:06.974853 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477b72ad-f50b-43f5-9ef6-ead43428f566-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.235727 4771 generic.go:334] "Generic (PLEG): container finished" podID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerID="a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81" exitCode=0 Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.235820 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jddgj" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.242743 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jddgj" event={"ID":"477b72ad-f50b-43f5-9ef6-ead43428f566","Type":"ContainerDied","Data":"a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81"} Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.242793 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jddgj" event={"ID":"477b72ad-f50b-43f5-9ef6-ead43428f566","Type":"ContainerDied","Data":"a0b0032f50e84bda38e957e83eda38f7d109a0a0d0a79581fea764d4ed33e818"} Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.242816 4771 scope.go:117] "RemoveContainer" containerID="a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.267681 4771 scope.go:117] "RemoveContainer" containerID="789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.274619 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jddgj"] Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.320093 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jddgj"] Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.322581 4771 scope.go:117] "RemoveContainer" containerID="8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.354944 4771 scope.go:117] "RemoveContainer" containerID="a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81" Jan 23 15:31:07 crc kubenswrapper[4771]: E0123 15:31:07.355508 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81\": container with ID starting with a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81 not found: ID does not exist" containerID="a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.355545 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81"} err="failed to get container status 
\"a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81\": rpc error: code = NotFound desc = could not find container \"a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81\": container with ID starting with a7bd94ffa8498fee9bdb6deb656b9cb537ab907d65547f78f647727ebea00f81 not found: ID does not exist" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.355567 4771 scope.go:117] "RemoveContainer" containerID="789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8" Jan 23 15:31:07 crc kubenswrapper[4771]: E0123 15:31:07.355844 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8\": container with ID starting with 789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8 not found: ID does not exist" containerID="789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.355865 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8"} err="failed to get container status \"789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8\": rpc error: code = NotFound desc = could not find container \"789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8\": container with ID starting with 789611d771aef82e79953e407f70304e59edfd962ec1eca7362db2984eeeabb8 not found: ID does not exist" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.355893 4771 scope.go:117] "RemoveContainer" containerID="8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513" Jan 23 15:31:07 crc kubenswrapper[4771]: E0123 15:31:07.356075 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513\": container with ID starting with 8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513 not found: ID does not exist" containerID="8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513" Jan 23 15:31:07 crc kubenswrapper[4771]: I0123 15:31:07.356092 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513"} err="failed to get container status \"8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513\": rpc error: code = NotFound desc = could not find container \"8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513\": container with ID starting with 8bd10dc169d19dfd28ddcce89cd5b058df5970c276ce252159e8aaa4cdb05513 not found: ID does not exist" Jan 23 15:31:09 crc kubenswrapper[4771]: I0123 15:31:09.241518 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" path="/var/lib/kubelet/pods/477b72ad-f50b-43f5-9ef6-ead43428f566/volumes" Jan 23 15:31:14 crc kubenswrapper[4771]: I0123 15:31:14.228917 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:31:14 crc kubenswrapper[4771]: E0123 15:31:14.229853 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:31:15 crc kubenswrapper[4771]: I0123 15:31:15.795653 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-ldmbx_1b312eb2-b256-4161-8f3c-dce680d9dbfc/kube-rbac-proxy/0.log" Jan 23 15:31:15 crc kubenswrapper[4771]: I0123 15:31:15.867379 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-ldmbx_1b312eb2-b256-4161-8f3c-dce680d9dbfc/controller/0.log" Jan 23 15:31:15 crc kubenswrapper[4771]: I0123 15:31:15.997784 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-frr-files/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.213674 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-reloader/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.227733 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-reloader/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.230639 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-metrics/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.256842 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-frr-files/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.472977 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-frr-files/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.473205 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-metrics/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.518926 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-reloader/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.530777 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-metrics/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.715020 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-frr-files/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.720294 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-reloader/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.749977 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/cp-metrics/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.782917 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/controller/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.930328 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/frr-metrics/0.log" Jan 23 15:31:16 crc kubenswrapper[4771]: I0123 15:31:16.979179 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/kube-rbac-proxy/0.log" Jan 23 15:31:17 crc kubenswrapper[4771]: I0123 15:31:17.007388 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/kube-rbac-proxy-frr/0.log" Jan 23 15:31:17 crc kubenswrapper[4771]: I0123 15:31:17.232590 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/reloader/0.log" Jan 23 15:31:17 crc kubenswrapper[4771]: I0123 15:31:17.324781 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-wgw65_ef45af5a-2541-4e3b-92b3-6b09ca1ffcf2/frr-k8s-webhook-server/0.log" Jan 23 15:31:17 crc kubenswrapper[4771]: I0123 15:31:17.547157 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5f8bd5d6b5-dd58q_a8d73a95-8330-4f66-ab99-024cec0be447/manager/0.log" Jan 23 15:31:17 crc kubenswrapper[4771]: I0123 15:31:17.769972 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f89cf578d-zwzwx_a6fc3f94-e22a-4d26-a74c-105d3b51173b/webhook-server/0.log" Jan 23 15:31:17 crc kubenswrapper[4771]: I0123 15:31:17.859667 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jx8m8_df96f6c9-141e-453a-a114-573d7604e8a7/kube-rbac-proxy/0.log" Jan 23 15:31:18 crc kubenswrapper[4771]: I0123 15:31:18.726267 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jx8m8_df96f6c9-141e-453a-a114-573d7604e8a7/speaker/0.log" Jan 23 15:31:19 crc kubenswrapper[4771]: I0123 15:31:19.164082 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kglff_741dbcde-6dfb-4b44-89fb-f020af39320d/frr/0.log" Jan 23 15:31:29 crc kubenswrapper[4771]: I0123 15:31:29.237043 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:31:29 crc kubenswrapper[4771]: E0123 15:31:29.238926 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:31:32 crc kubenswrapper[4771]: I0123 15:31:32.549004 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m_9578571d-8113-4e8e-b829-4c7283c4fbf1/util/0.log" Jan 23 15:31:32 crc kubenswrapper[4771]: I0123 15:31:32.835777 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m_9578571d-8113-4e8e-b829-4c7283c4fbf1/pull/0.log" Jan 23 15:31:32 crc kubenswrapper[4771]: I0123 15:31:32.880081 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m_9578571d-8113-4e8e-b829-4c7283c4fbf1/util/0.log" Jan 23 15:31:32 crc kubenswrapper[4771]: I0123 15:31:32.933778 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m_9578571d-8113-4e8e-b829-4c7283c4fbf1/pull/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.100916 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m_9578571d-8113-4e8e-b829-4c7283c4fbf1/util/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.156519 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m_9578571d-8113-4e8e-b829-4c7283c4fbf1/pull/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.179403 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcb8h5m_9578571d-8113-4e8e-b829-4c7283c4fbf1/extract/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.311853 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj_5b7807a4-f406-498f-b7c4-a1bbe8ab5957/util/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.549859 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj_5b7807a4-f406-498f-b7c4-a1bbe8ab5957/util/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.552875 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj_5b7807a4-f406-498f-b7c4-a1bbe8ab5957/pull/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.607396 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj_5b7807a4-f406-498f-b7c4-a1bbe8ab5957/pull/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.797805 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj_5b7807a4-f406-498f-b7c4-a1bbe8ab5957/extract/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.843955 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj_5b7807a4-f406-498f-b7c4-a1bbe8ab5957/pull/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.851396 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713vn7jj_5b7807a4-f406-498f-b7c4-a1bbe8ab5957/util/0.log" Jan 23 15:31:33 crc kubenswrapper[4771]: I0123 15:31:33.998685 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_f5bf361f-9ef1-4f6f-bc47-c428011faeac/util/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.203671 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_f5bf361f-9ef1-4f6f-bc47-c428011faeac/pull/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.219601 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_f5bf361f-9ef1-4f6f-bc47-c428011faeac/util/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.220460 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_f5bf361f-9ef1-4f6f-bc47-c428011faeac/pull/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.444700 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_f5bf361f-9ef1-4f6f-bc47-c428011faeac/util/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.460901 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_f5bf361f-9ef1-4f6f-bc47-c428011faeac/pull/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.482978 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f087s65s_f5bf361f-9ef1-4f6f-bc47-c428011faeac/extract/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.668521 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-96tmq_9fab9170-3473-4996-b3c4-55b1f7b4f05a/extract-utilities/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.833222 4771 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8xrmz"] Jan 23 15:31:34 crc kubenswrapper[4771]: E0123 15:31:34.834062 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="extract-utilities" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.834090 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="extract-utilities" Jan 23 15:31:34 crc kubenswrapper[4771]: E0123 15:31:34.834113 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="extract-content" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.834121 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="extract-content" Jan 23 15:31:34 crc kubenswrapper[4771]: E0123 15:31:34.834172 4771 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="registry-server" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.834181 4771 state_mem.go:107] "Deleted CPUSet assignment" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="registry-server" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.834541 4771 memory_manager.go:354] "RemoveStaleState removing state" podUID="477b72ad-f50b-43f5-9ef6-ead43428f566" containerName="registry-server" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.836804 4771 util.go:30] "No sandbox for pod can be found. 
Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.847397 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xrmz"] Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.890201 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-96tmq_9fab9170-3473-4996-b3c4-55b1f7b4f05a/extract-utilities/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.939174 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-96tmq_9fab9170-3473-4996-b3c4-55b1f7b4f05a/extract-content/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.943843 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-96tmq_9fab9170-3473-4996-b3c4-55b1f7b4f05a/extract-content/0.log" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.952836 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-catalog-content\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.953265 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-utilities\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:34 crc kubenswrapper[4771]: I0123 15:31:34.953446 4771 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k7gj\" (UniqueName: \"kubernetes.io/projected/dc77c6a2-d296-45fe-af4c-0a9d42b47761-kube-api-access-7k7gj\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.055748 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-utilities\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.055898 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k7gj\" (UniqueName: \"kubernetes.io/projected/dc77c6a2-d296-45fe-af4c-0a9d42b47761-kube-api-access-7k7gj\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.055984 4771 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-catalog-content\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.056522 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-catalog-content\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.056768 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-utilities\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.081750 4771 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k7gj\" (UniqueName: \"kubernetes.io/projected/dc77c6a2-d296-45fe-af4c-0a9d42b47761-kube-api-access-7k7gj\") pod \"redhat-operators-8xrmz\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.172224 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-96tmq_9fab9170-3473-4996-b3c4-55b1f7b4f05a/extract-content/0.log" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.182605 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-96tmq_9fab9170-3473-4996-b3c4-55b1f7b4f05a/extract-utilities/0.log" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.185473 4771 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.595070 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ch8sp_26a8628d-5423-49ce-b186-d31351516531/extract-utilities/0.log" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.800581 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-96tmq_9fab9170-3473-4996-b3c4-55b1f7b4f05a/registry-server/0.log" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.816293 4771 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xrmz"] Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.988864 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ch8sp_26a8628d-5423-49ce-b186-d31351516531/extract-content/0.log" Jan 23 15:31:35 crc kubenswrapper[4771]: I0123 15:31:35.999461 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ch8sp_26a8628d-5423-49ce-b186-d31351516531/extract-content/0.log" Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.054100 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ch8sp_26a8628d-5423-49ce-b186-d31351516531/extract-utilities/0.log" Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.293474 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ch8sp_26a8628d-5423-49ce-b186-d31351516531/extract-utilities/0.log" Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.348233 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ch8sp_26a8628d-5423-49ce-b186-d31351516531/extract-content/0.log" Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.599859 4771 generic.go:334] 
"Generic (PLEG): container finished" podID="dc77c6a2-d296-45fe-af4c-0a9d42b47761" containerID="3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08" exitCode=0 Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.599918 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xrmz" event={"ID":"dc77c6a2-d296-45fe-af4c-0a9d42b47761","Type":"ContainerDied","Data":"3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08"} Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.599952 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xrmz" event={"ID":"dc77c6a2-d296-45fe-af4c-0a9d42b47761","Type":"ContainerStarted","Data":"4fade07a9e4ab6a35b571506b486bc31d77d0640e4a2a7c5c55de7b9cb55b0d1"} Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.698488 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dwmns_52249bb5-fa7b-405e-a13d-be7d60744ae9/extract-utilities/0.log" Jan 23 15:31:36 crc kubenswrapper[4771]: I0123 15:31:36.919127 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ch8sp_26a8628d-5423-49ce-b186-d31351516531/registry-server/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.050384 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dwmns_52249bb5-fa7b-405e-a13d-be7d60744ae9/extract-content/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.099638 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dwmns_52249bb5-fa7b-405e-a13d-be7d60744ae9/extract-utilities/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.139036 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dwmns_52249bb5-fa7b-405e-a13d-be7d60744ae9/extract-content/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.345985 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dwmns_52249bb5-fa7b-405e-a13d-be7d60744ae9/extract-content/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.365887 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dwmns_52249bb5-fa7b-405e-a13d-be7d60744ae9/extract-utilities/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.465034 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mxvj7_42c56fef-fece-4a79-ac6e-dc70d22b414c/extract-utilities/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.465565 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dwmns_52249bb5-fa7b-405e-a13d-be7d60744ae9/registry-server/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.612696 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xrmz" event={"ID":"dc77c6a2-d296-45fe-af4c-0a9d42b47761","Type":"ContainerStarted","Data":"803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b"} Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.697953 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mxvj7_42c56fef-fece-4a79-ac6e-dc70d22b414c/extract-content/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.746893 
4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mxvj7_42c56fef-fece-4a79-ac6e-dc70d22b414c/extract-content/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.756021 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mxvj7_42c56fef-fece-4a79-ac6e-dc70d22b414c/extract-utilities/0.log" Jan 23 15:31:37 crc kubenswrapper[4771]: I0123 15:31:37.959480 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mxvj7_42c56fef-fece-4a79-ac6e-dc70d22b414c/extract-utilities/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.023794 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6njm_f7ac08f4-4813-42fa-939d-c93bdb71e6de/extract-utilities/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.040262 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mxvj7_42c56fef-fece-4a79-ac6e-dc70d22b414c/extract-content/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.130792 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mxvj7_42c56fef-fece-4a79-ac6e-dc70d22b414c/registry-server/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.279746 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6njm_f7ac08f4-4813-42fa-939d-c93bdb71e6de/extract-utilities/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.321653 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6njm_f7ac08f4-4813-42fa-939d-c93bdb71e6de/extract-content/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.350429 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6njm_f7ac08f4-4813-42fa-939d-c93bdb71e6de/extract-content/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.529281 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6njm_f7ac08f4-4813-42fa-939d-c93bdb71e6de/extract-content/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.577016 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6njm_f7ac08f4-4813-42fa-939d-c93bdb71e6de/extract-utilities/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.579927 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b2dzb_b5601436-c1e4-461d-8e2a-23c32e5afc54/extract-utilities/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.715181 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q6njm_f7ac08f4-4813-42fa-939d-c93bdb71e6de/registry-server/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.879019 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b2dzb_b5601436-c1e4-461d-8e2a-23c32e5afc54/extract-utilities/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.918987 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b2dzb_b5601436-c1e4-461d-8e2a-23c32e5afc54/extract-content/0.log" Jan 23 15:31:38 crc kubenswrapper[4771]: I0123 15:31:38.925928 4771 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b2dzb_b5601436-c1e4-461d-8e2a-23c32e5afc54/extract-content/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.120760 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b2dzb_b5601436-c1e4-461d-8e2a-23c32e5afc54/extract-content/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.176974 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b2dzb_b5601436-c1e4-461d-8e2a-23c32e5afc54/extract-utilities/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.206160 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c4x7x_e835f247-cf2e-43ee-9785-143a54f1dc97/extract-utilities/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.333631 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-b2dzb_b5601436-c1e4-461d-8e2a-23c32e5afc54/registry-server/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.488723 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c4x7x_e835f247-cf2e-43ee-9785-143a54f1dc97/extract-utilities/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.538393 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c4x7x_e835f247-cf2e-43ee-9785-143a54f1dc97/extract-content/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.546578 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c4x7x_e835f247-cf2e-43ee-9785-143a54f1dc97/extract-content/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.759733 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c4x7x_e835f247-cf2e-43ee-9785-143a54f1dc97/extract-utilities/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.803458 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c4x7x_e835f247-cf2e-43ee-9785-143a54f1dc97/extract-content/0.log" Jan 23 15:31:39 crc kubenswrapper[4771]: I0123 15:31:39.811320 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j5d5v_5be36885-6a91-48ec-b6af-a5abcbaaed97/extract-utilities/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.091850 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j5d5v_5be36885-6a91-48ec-b6af-a5abcbaaed97/extract-utilities/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.091922 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j5d5v_5be36885-6a91-48ec-b6af-a5abcbaaed97/extract-content/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.094874 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c4x7x_e835f247-cf2e-43ee-9785-143a54f1dc97/registry-server/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.131043 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j5d5v_5be36885-6a91-48ec-b6af-a5abcbaaed97/extract-content/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.377037 4771 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_community-operators-j5d5v_5be36885-6a91-48ec-b6af-a5abcbaaed97/extract-utilities/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.377867 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j5d5v_5be36885-6a91-48ec-b6af-a5abcbaaed97/extract-content/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.524476 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j8jsw_77a36b6a-701f-46d6-9415-c7d2546a9fd7/extract-utilities/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.551653 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j5d5v_5be36885-6a91-48ec-b6af-a5abcbaaed97/registry-server/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.684690 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j8jsw_77a36b6a-701f-46d6-9415-c7d2546a9fd7/extract-content/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.704214 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j8jsw_77a36b6a-701f-46d6-9415-c7d2546a9fd7/extract-content/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.731651 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j8jsw_77a36b6a-701f-46d6-9415-c7d2546a9fd7/extract-utilities/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.963619 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j8jsw_77a36b6a-701f-46d6-9415-c7d2546a9fd7/extract-content/0.log" Jan 23 15:31:40 crc kubenswrapper[4771]: I0123 15:31:40.982863 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j8jsw_77a36b6a-701f-46d6-9415-c7d2546a9fd7/extract-utilities/0.log" Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.088355 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-tljgp_9a16cfd1-e3c0-43be-a517-fa7b6e4bf56a/marketplace-operator/0.log" Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.305793 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9fcfx_ae249f45-cd5f-4837-9b26-cd4981147454/extract-utilities/0.log" Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.409376 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9fcfx_ae249f45-cd5f-4837-9b26-cd4981147454/extract-utilities/0.log" Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.509930 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9fcfx_ae249f45-cd5f-4837-9b26-cd4981147454/extract-content/0.log" Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.520240 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9fcfx_ae249f45-cd5f-4837-9b26-cd4981147454/extract-content/0.log" Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.687785 4771 generic.go:334] "Generic (PLEG): container finished" podID="dc77c6a2-d296-45fe-af4c-0a9d42b47761" containerID="803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b" exitCode=0 Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.687842 4771 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xrmz" event={"ID":"dc77c6a2-d296-45fe-af4c-0a9d42b47761","Type":"ContainerDied","Data":"803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b"} Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.802161 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9fcfx_ae249f45-cd5f-4837-9b26-cd4981147454/extract-content/0.log" Jan 23 15:31:41 crc kubenswrapper[4771]: I0123 15:31:41.834202 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9fcfx_ae249f45-cd5f-4837-9b26-cd4981147454/extract-utilities/0.log" Jan 23 15:31:42 crc kubenswrapper[4771]: I0123 15:31:42.022000 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ckzcq_3bc17f58-61bd-4595-8d2b-83f9c2cc4514/extract-utilities/0.log" Jan 23 15:31:42 crc kubenswrapper[4771]: I0123 15:31:42.210025 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ckzcq_3bc17f58-61bd-4595-8d2b-83f9c2cc4514/extract-content/0.log" Jan 23 15:31:42 crc kubenswrapper[4771]: I0123 15:31:42.226048 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ckzcq_3bc17f58-61bd-4595-8d2b-83f9c2cc4514/extract-utilities/0.log" Jan 23 15:31:42 crc kubenswrapper[4771]: I0123 15:31:42.242740 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ckzcq_3bc17f58-61bd-4595-8d2b-83f9c2cc4514/extract-content/0.log" Jan 23 15:31:42 crc kubenswrapper[4771]: I0123 15:31:42.448103 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ckzcq_3bc17f58-61bd-4595-8d2b-83f9c2cc4514/extract-content/0.log" Jan 23 15:31:42 crc kubenswrapper[4771]: I0123 15:31:42.448138 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ckzcq_3bc17f58-61bd-4595-8d2b-83f9c2cc4514/extract-utilities/0.log" Jan 23 15:31:42 crc kubenswrapper[4771]: I0123 15:31:42.999401 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9fcfx_ae249f45-cd5f-4837-9b26-cd4981147454/registry-server/0.log" Jan 23 15:31:43 crc kubenswrapper[4771]: I0123 15:31:43.607860 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-j8jsw_77a36b6a-701f-46d6-9415-c7d2546a9fd7/registry-server/0.log" Jan 23 15:31:43 crc kubenswrapper[4771]: I0123 15:31:43.711572 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xrmz" event={"ID":"dc77c6a2-d296-45fe-af4c-0a9d42b47761","Type":"ContainerStarted","Data":"d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa"} Jan 23 15:31:43 crc kubenswrapper[4771]: I0123 15:31:43.742196 4771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8xrmz" podStartSLOduration=3.853054996 podStartE2EDuration="9.742127382s" podCreationTimestamp="2026-01-23 15:31:34 +0000 UTC" firstStartedPulling="2026-01-23 15:31:36.610939186 +0000 UTC m=+7137.633476811" lastFinishedPulling="2026-01-23 15:31:42.500011572 +0000 UTC m=+7143.522549197" observedRunningTime="2026-01-23 15:31:43.732089156 +0000 UTC m=+7144.754626781" watchObservedRunningTime="2026-01-23 15:31:43.742127382 +0000 UTC m=+7144.764665007" Jan 23 
15:31:43 crc kubenswrapper[4771]: I0123 15:31:43.897978 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ckzcq_3bc17f58-61bd-4595-8d2b-83f9c2cc4514/registry-server/0.log" Jan 23 15:31:44 crc kubenswrapper[4771]: I0123 15:31:44.228887 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:31:44 crc kubenswrapper[4771]: E0123 15:31:44.229193 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:31:45 crc kubenswrapper[4771]: I0123 15:31:45.186382 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:45 crc kubenswrapper[4771]: I0123 15:31:45.186691 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:46 crc kubenswrapper[4771]: I0123 15:31:46.241950 4771 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8xrmz" podUID="dc77c6a2-d296-45fe-af4c-0a9d42b47761" containerName="registry-server" probeResult="failure" output=< Jan 23 15:31:46 crc kubenswrapper[4771]: timeout: failed to connect service ":50051" within 1s Jan 23 15:31:46 crc kubenswrapper[4771]: > Jan 23 15:31:55 crc kubenswrapper[4771]: I0123 15:31:55.228235 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:31:55 crc kubenswrapper[4771]: E0123 15:31:55.228980 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:31:55 crc kubenswrapper[4771]: I0123 15:31:55.242955 4771 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:55 crc kubenswrapper[4771]: I0123 15:31:55.304686 4771 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:57 crc kubenswrapper[4771]: I0123 15:31:57.447238 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-w6ztm_6b8c9e01-91e4-466a-bcca-f2302f8cf535/prometheus-operator/0.log" Jan 23 15:31:57 crc kubenswrapper[4771]: I0123 15:31:57.503326 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c9c49f46c-6hkmq_f63b51b7-c5c0-48e1-bd12-7d0cb3dfc23b/prometheus-operator-admission-webhook/0.log" Jan 23 15:31:57 crc kubenswrapper[4771]: I0123 15:31:57.536205 4771 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c9c49f46c-vnrk8_1b61dbbd-9e69-43e6-9a83-68115a11bef6/prometheus-operator-admission-webhook/0.log" Jan 23 15:31:57 crc kubenswrapper[4771]: I0123 15:31:57.701458 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tldhc_e93a101d-0d44-4b57-8ec6-bd9911f2e61b/operator/0.log" Jan 23 15:31:57 crc kubenswrapper[4771]: I0123 15:31:57.703641 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-hp46t_4d0ca4c2-0c2e-438c-a300-71ec0d624905/perses-operator/0.log" Jan 23 15:31:58 crc kubenswrapper[4771]: I0123 15:31:58.418827 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xrmz"] Jan 23 15:31:58 crc kubenswrapper[4771]: I0123 15:31:58.419055 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8xrmz" podUID="dc77c6a2-d296-45fe-af4c-0a9d42b47761" containerName="registry-server" containerID="cri-o://d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa" gracePeriod=2 Jan 23 15:31:58 crc kubenswrapper[4771]: I0123 15:31:58.962844 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.038232 4771 generic.go:334] "Generic (PLEG): container finished" podID="dc77c6a2-d296-45fe-af4c-0a9d42b47761" containerID="d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa" exitCode=0 Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.038286 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xrmz" event={"ID":"dc77c6a2-d296-45fe-af4c-0a9d42b47761","Type":"ContainerDied","Data":"d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa"} Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.038370 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xrmz" event={"ID":"dc77c6a2-d296-45fe-af4c-0a9d42b47761","Type":"ContainerDied","Data":"4fade07a9e4ab6a35b571506b486bc31d77d0640e4a2a7c5c55de7b9cb55b0d1"} Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.038402 4771 scope.go:117] "RemoveContainer" containerID="d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.038686 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8xrmz" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.076803 4771 scope.go:117] "RemoveContainer" containerID="803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.079439 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-catalog-content\") pod \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.079572 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-utilities\") pod \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.079775 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k7gj\" (UniqueName: \"kubernetes.io/projected/dc77c6a2-d296-45fe-af4c-0a9d42b47761-kube-api-access-7k7gj\") pod \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\" (UID: \"dc77c6a2-d296-45fe-af4c-0a9d42b47761\") " Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.085276 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-utilities" (OuterVolumeSpecName: "utilities") pod "dc77c6a2-d296-45fe-af4c-0a9d42b47761" (UID: "dc77c6a2-d296-45fe-af4c-0a9d42b47761"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.089922 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc77c6a2-d296-45fe-af4c-0a9d42b47761-kube-api-access-7k7gj" (OuterVolumeSpecName: "kube-api-access-7k7gj") pod "dc77c6a2-d296-45fe-af4c-0a9d42b47761" (UID: "dc77c6a2-d296-45fe-af4c-0a9d42b47761"). InnerVolumeSpecName "kube-api-access-7k7gj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.155047 4771 scope.go:117] "RemoveContainer" containerID="3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.183768 4771 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.183812 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k7gj\" (UniqueName: \"kubernetes.io/projected/dc77c6a2-d296-45fe-af4c-0a9d42b47761-kube-api-access-7k7gj\") on node \"crc\" DevicePath \"\"" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.200103 4771 scope.go:117] "RemoveContainer" containerID="d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa" Jan 23 15:31:59 crc kubenswrapper[4771]: E0123 15:31:59.200703 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa\": container with ID starting with d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa not found: ID does not exist" containerID="d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.200781 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa"} err="failed to get container status \"d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa\": rpc error: code = NotFound desc = could not find container \"d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa\": container with ID starting with d81765574d1b77ea7ffcdc28a6760297c75e33f3d0d9e50d2fa980e5e8d0bcfa not found: ID does not exist" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.200828 4771 scope.go:117] "RemoveContainer" containerID="803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b" Jan 23 15:31:59 crc kubenswrapper[4771]: E0123 15:31:59.201533 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b\": container with ID starting with 803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b not found: ID does not exist" containerID="803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.201624 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b"} err="failed to get container status \"803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b\": rpc error: code = NotFound desc = could not find container \"803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b\": container with ID starting with 803a92966b57064428de3691c489ae5b2bb537f19453ff76f95ac5d4f84a686b not found: ID does not exist" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.201677 4771 scope.go:117] "RemoveContainer" containerID="3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08" Jan 23 15:31:59 crc kubenswrapper[4771]: E0123 15:31:59.202107 4771 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08\": container with ID starting with 3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08 not found: ID does not exist" containerID="3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.202184 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08"} err="failed to get container status \"3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08\": rpc error: code = NotFound desc = could not find container \"3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08\": container with ID starting with 3145afd90beb48e20d96efed4cc80665fe3531c286b95d74b1f0a0424b848d08 not found: ID does not exist" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.256509 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc77c6a2-d296-45fe-af4c-0a9d42b47761" (UID: "dc77c6a2-d296-45fe-af4c-0a9d42b47761"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.291668 4771 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc77c6a2-d296-45fe-af4c-0a9d42b47761-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.375806 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xrmz"] Jan 23 15:31:59 crc kubenswrapper[4771]: I0123 15:31:59.385391 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8xrmz"] Jan 23 15:32:01 crc kubenswrapper[4771]: I0123 15:32:01.240286 4771 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc77c6a2-d296-45fe-af4c-0a9d42b47761" path="/var/lib/kubelet/pods/dc77c6a2-d296-45fe-af4c-0a9d42b47761/volumes" Jan 23 15:32:08 crc kubenswrapper[4771]: I0123 15:32:08.228066 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:32:08 crc kubenswrapper[4771]: E0123 15:32:08.229171 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:32:22 crc kubenswrapper[4771]: I0123 15:32:22.229717 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:32:22 crc kubenswrapper[4771]: E0123 15:32:22.231380 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:32:35 crc kubenswrapper[4771]: I0123 15:32:35.228297 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:32:35 crc kubenswrapper[4771]: E0123 15:32:35.229365 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:32:49 crc kubenswrapper[4771]: I0123 15:32:49.240596 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:32:49 crc kubenswrapper[4771]: E0123 15:32:49.241397 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:33:01 crc kubenswrapper[4771]: I0123 15:33:01.227841 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:33:01 crc kubenswrapper[4771]: E0123 15:33:01.228766 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:33:15 crc kubenswrapper[4771]: I0123 15:33:15.228474 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:33:15 crc kubenswrapper[4771]: E0123 15:33:15.229228 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:33:29 crc kubenswrapper[4771]: I0123 15:33:29.235668 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:33:29 crc kubenswrapper[4771]: E0123 15:33:29.236533 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:33:42 crc kubenswrapper[4771]: I0123 15:33:42.228437 4771 scope.go:117] "RemoveContainer" 
containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:33:42 crc kubenswrapper[4771]: E0123 15:33:42.229378 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:33:57 crc kubenswrapper[4771]: I0123 15:33:57.232613 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:33:57 crc kubenswrapper[4771]: E0123 15:33:57.233458 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:34:08 crc kubenswrapper[4771]: I0123 15:34:08.231592 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:34:08 crc kubenswrapper[4771]: E0123 15:34:08.232842 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:34:12 crc kubenswrapper[4771]: I0123 15:34:12.606943 4771 generic.go:334] "Generic (PLEG): container finished" podID="fc59aab3-99ec-45d0-847c-3a1751073555" containerID="44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00" exitCode=0 Jan 23 15:34:12 crc kubenswrapper[4771]: I0123 15:34:12.607080 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2vtmc/must-gather-rffk9" event={"ID":"fc59aab3-99ec-45d0-847c-3a1751073555","Type":"ContainerDied","Data":"44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00"} Jan 23 15:34:12 crc kubenswrapper[4771]: I0123 15:34:12.610187 4771 scope.go:117] "RemoveContainer" containerID="44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00" Jan 23 15:34:13 crc kubenswrapper[4771]: I0123 15:34:13.077937 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2vtmc_must-gather-rffk9_fc59aab3-99ec-45d0-847c-3a1751073555/gather/0.log" Jan 23 15:34:21 crc kubenswrapper[4771]: I0123 15:34:21.229184 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:34:21 crc kubenswrapper[4771]: E0123 15:34:21.230217 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" 
podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:34:21 crc kubenswrapper[4771]: I0123 15:34:21.915472 4771 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2vtmc/must-gather-rffk9"] Jan 23 15:34:21 crc kubenswrapper[4771]: I0123 15:34:21.916191 4771 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2vtmc/must-gather-rffk9" podUID="fc59aab3-99ec-45d0-847c-3a1751073555" containerName="copy" containerID="cri-o://c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf" gracePeriod=2 Jan 23 15:34:21 crc kubenswrapper[4771]: I0123 15:34:21.951696 4771 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2vtmc/must-gather-rffk9"] Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.379780 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2vtmc_must-gather-rffk9_fc59aab3-99ec-45d0-847c-3a1751073555/copy/0.log" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.380620 4771 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.561350 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc59aab3-99ec-45d0-847c-3a1751073555-must-gather-output\") pod \"fc59aab3-99ec-45d0-847c-3a1751073555\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.561469 4771 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxvw7\" (UniqueName: \"kubernetes.io/projected/fc59aab3-99ec-45d0-847c-3a1751073555-kube-api-access-fxvw7\") pod \"fc59aab3-99ec-45d0-847c-3a1751073555\" (UID: \"fc59aab3-99ec-45d0-847c-3a1751073555\") " Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.575226 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc59aab3-99ec-45d0-847c-3a1751073555-kube-api-access-fxvw7" (OuterVolumeSpecName: "kube-api-access-fxvw7") pod "fc59aab3-99ec-45d0-847c-3a1751073555" (UID: "fc59aab3-99ec-45d0-847c-3a1751073555"). InnerVolumeSpecName "kube-api-access-fxvw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.666151 4771 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxvw7\" (UniqueName: \"kubernetes.io/projected/fc59aab3-99ec-45d0-847c-3a1751073555-kube-api-access-fxvw7\") on node \"crc\" DevicePath \"\"" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.748922 4771 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2vtmc_must-gather-rffk9_fc59aab3-99ec-45d0-847c-3a1751073555/copy/0.log" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.749679 4771 generic.go:334] "Generic (PLEG): container finished" podID="fc59aab3-99ec-45d0-847c-3a1751073555" containerID="c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf" exitCode=143 Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.749752 4771 scope.go:117] "RemoveContainer" containerID="c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.749952 4771 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2vtmc/must-gather-rffk9" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.798855 4771 scope.go:117] "RemoveContainer" containerID="44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.826770 4771 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc59aab3-99ec-45d0-847c-3a1751073555-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fc59aab3-99ec-45d0-847c-3a1751073555" (UID: "fc59aab3-99ec-45d0-847c-3a1751073555"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.870945 4771 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fc59aab3-99ec-45d0-847c-3a1751073555-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.887743 4771 scope.go:117] "RemoveContainer" containerID="c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf" Jan 23 15:34:22 crc kubenswrapper[4771]: E0123 15:34:22.893312 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf\": container with ID starting with c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf not found: ID does not exist" containerID="c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.893354 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf"} err="failed to get container status \"c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf\": rpc error: code = NotFound desc = could not find container \"c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf\": container with ID starting with c1748953ff7844fed52da6e01e01ad519a24ff592e614ce108d8f5f298660ccf not found: ID does not exist" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.893376 4771 scope.go:117] "RemoveContainer" containerID="44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00" Jan 23 15:34:22 crc kubenswrapper[4771]: E0123 15:34:22.893752 4771 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00\": container with ID starting with 44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00 not found: ID does not exist" containerID="44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00" Jan 23 15:34:22 crc kubenswrapper[4771]: I0123 15:34:22.893773 4771 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00"} err="failed to get container status \"44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00\": rpc error: code = NotFound desc = could not find container \"44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00\": container with ID starting with 44738738f4d3b201b4aa20dfe80141cb4c46ba1a1f3d76aac9ddb36174832a00 not found: ID does not exist" Jan 23 15:34:23 crc kubenswrapper[4771]: I0123 15:34:23.239968 4771 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fc59aab3-99ec-45d0-847c-3a1751073555" path="/var/lib/kubelet/pods/fc59aab3-99ec-45d0-847c-3a1751073555/volumes" Jan 23 15:34:36 crc kubenswrapper[4771]: I0123 15:34:36.228686 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:34:36 crc kubenswrapper[4771]: E0123 15:34:36.229836 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:34:49 crc kubenswrapper[4771]: I0123 15:34:49.236376 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:34:49 crc kubenswrapper[4771]: E0123 15:34:49.237979 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:35:02 crc kubenswrapper[4771]: I0123 15:35:02.231327 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:35:02 crc kubenswrapper[4771]: E0123 15:35:02.232835 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:35:02 crc kubenswrapper[4771]: I0123 15:35:02.600048 4771 scope.go:117] "RemoveContainer" containerID="43f63381fd7aee06c77cc71d8274a1eb08336c7344f4cb052cf15a1709b958df" Jan 23 15:35:14 crc kubenswrapper[4771]: I0123 15:35:14.229371 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:35:14 crc kubenswrapper[4771]: E0123 15:35:14.230237 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:35:25 crc kubenswrapper[4771]: I0123 15:35:25.229200 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:35:25 crc kubenswrapper[4771]: E0123 15:35:25.230321 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:35:38 crc kubenswrapper[4771]: I0123 15:35:38.228396 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:35:38 crc kubenswrapper[4771]: E0123 15:35:38.229210 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:35:52 crc kubenswrapper[4771]: I0123 15:35:52.228879 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:35:52 crc kubenswrapper[4771]: E0123 15:35:52.230278 4771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z299d_openshift-machine-config-operator(cd8e44e1-6639-45d3-927f-347dc88e96c6)\"" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" Jan 23 15:36:05 crc kubenswrapper[4771]: I0123 15:36:05.229741 4771 scope.go:117] "RemoveContainer" containerID="736cde756600314c9aa8fe7b2c51e52783033183219d2462d69b2c1dbc576b96" Jan 23 15:36:05 crc kubenswrapper[4771]: I0123 15:36:05.952496 4771 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z299d" event={"ID":"cd8e44e1-6639-45d3-927f-347dc88e96c6","Type":"ContainerStarted","Data":"555fb0fceac199bb2ae648c29c6ed37c5780baf196d52c0b5000fb00b41fe043"} Jan 23 15:38:30 crc kubenswrapper[4771]: I0123 15:38:30.312259 4771 patch_prober.go:28] interesting pod/machine-config-daemon-z299d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 15:38:30 crc kubenswrapper[4771]: I0123 15:38:30.312919 4771 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z299d" podUID="cd8e44e1-6639-45d3-927f-347dc88e96c6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"